You might want to keep a salt shaker with you while reading a June 7, 2016 essay by Matteo Palma (Queen Mary University of London) about nanotechnology and DNA on The Conversation website (h/t June 7, 2016 news item on Nanowerk).
This is not a ‘hype’ piece, as Palma backs every claim with links to the research while providing a good overview of some very exciting work, but the mood is a bit euphoric, so you may want to keep the aforementioned salt shaker nearby.
Palma offers a very nice beginner’s introduction, especially helpful for someone who only half-remembers their high school biology (from the June 7, 2016 essay),
DNA is one of the most amazing molecules in nature, providing a way to carry the instructions needed to create almost any lifeform on Earth in a microscopic package. Now scientists are finding ways to push DNA even further, using it not just to store information but to create physical components in a range of biological machines.
Deoxyribonucleic acid or “DNA” carries the genetic information that we, and all living organisms, use to function. It typically comes in the form of the famous double-helix shape, made up of two single-stranded DNA molecules folded into a spiral. Each of these is made up of a series of four different types of molecular component: adenine (A), guanine (G), thymine (T), and cytosine (C).
Genes are made up from different sequences of these building block components, and the order in which they appear in a strand of DNA is what encodes genetic information. But by precisely designing different A,G,T and C sequences, scientists have recently been able to develop new ways of folding DNA into different origami shapes, beyond the conventional double helix.
This approach has opened up new possibilities of using DNA beyond its genetic and biological purpose, turning it into a Lego-like material for building objects that are just a few billionths of a metre in diameter (nanoscale). DNA-based materials are now being used for a variety of applications, ranging from templates for electronic nano-devices, to ways of precisely carrying drugs to diseased cells.
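The sequence design Palma describes rests on Watson-Crick pairing: A binds T and G binds C, so a designed strand dictates exactly which partner strand (or which part of itself) it will stick to. As a rough illustration (mine, not Palma’s, and using a made-up sequence), a few lines of Python can compute the complementary strand for any sequence:

```python
# Watson-Crick pairing rules: A pairs with T, G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that would base-pair with `strand`.

    The partner strand runs antiparallel, so by convention it is
    read in the opposite direction, hence the reversal.
    """
    return "".join(PAIRS[base] for base in reversed(strand))

# A made-up 10-base sequence, purely for illustration.
seq = "ATGCGTTACG"
print(complement(seq))  # -> CGTAACGCAT
```

DNA origami designers exploit exactly this predictability, choosing sequences so that strands fold and staple themselves into the intended shape.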
He highlights some Canadian work,
Designing electronic devices that are just nanometres in size opens up all sorts of possible applications but makes it harder to spot defects. As a way of dealing with this, researchers at the University of Montreal have used DNA to create ultrasensitive nanoscale thermometers that could help find minuscule hotspots in nanodevices (which would indicate a defect). They could also be used to monitor the temperature inside living cells.
The nanothermometers are made using loops of DNA that act as switches, folding or unfolding in response to temperature changes. This movement can be detected by attaching optical probes to the DNA. The researchers now want to build these nanothermometers into larger DNA devices that can work inside the human body.
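The folding/unfolding switch can be pictured with a standard two-state (van ’t Hoff) melting model: below the melting temperature most loops are folded, above it most are open. The sketch below is my own illustration with invented thermodynamic parameters, not the Montreal team’s actual design:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def fraction_folded(temp_c, dh=-200e3, tm_c=50.0):
    """Two-state model: fraction of DNA loops folded at temp_c (Celsius).

    dh: enthalpy of folding in J/mol (negative: folding releases heat).
    tm_c: melting temperature in Celsius, where half the loops are folded.
    Both values are made up for illustration.
    """
    t = temp_c + 273.15
    tm = tm_c + 273.15
    # At Tm, dG = 0, so dS = dH/Tm; hence dG(T) = dH * (1 - T/Tm).
    dg = dh * (1 - t / tm)
    k = math.exp(-dg / (R * t))  # equilibrium constant, folded/unfolded
    return k / (1 + k)

for t in (30, 50, 70):
    print(f"{t} C: {fraction_folded(t):.2f} folded")
```

Because the folded fraction swings sharply around the melting temperature, attaching an optical probe whose signal tracks that fraction turns the loop into a thermometer.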
He also mentions the nanobots that will heal your body (according to many works of fiction),
Researchers at Harvard Medical School have used DNA to design and build a nanosized robot that acts as a drug delivery vehicle to target specific cells. The nanorobot comes in the form of an open barrel made of DNA, whose two halves are connected by a hinge held shut by special DNA handles. These handles can recognise combinations of specific proteins present on the surface of cells, including ones associated with diseases.
When the robot comes into contact with the right cells, it opens the container and delivers its cargo. When applied to a mixture of healthy and cancerous human blood cells, these robots showed the ability to target and kill half of the cancer cells, while the healthy cells were left unharmed.
Palma is describing a very exciting development and there are many teams worldwide working on ways to make drugs more effective and less side effect-ridden. However there does seem to be a bit of a problem with targeted drug delivery as noted in my April 27, 2016 posting,
According to an April 27, 2016 news item on Nanowerk researchers at the University of Toronto (Canada) along with their collaborators in the US (Harvard Medical School) and Japan (University of Tokyo) have determined that less than 1% of nanoparticle-based drugs reach their intended destination …
Less than 1%? Admittedly, nanoparticles are not the same as nanobots but the problem is in the delivery, from my April 27, 2016 posting,
… the authors argue that, in order to increase nanoparticle delivery efficiency, a systematic and coordinated long-term strategy is necessary. To build a strong foundation for the field of cancer nanomedicine, researchers will need to understand a lot more about the interactions between nanoparticles and the body’s various organs than they do today. …
I imagine nanobots will suffer a similar fate since the actual delivery mechanism to a targeted cell is still a mystery.
I quite enjoyed Palma’s essay and appreciated the links he provided. My only proviso: keep a salt shaker nearby. That rosy future is going to take a while to get here.
How do you feel about scientists and interested parties meeting behind closed doors at an invitation-only meeting to discuss creating a second human genome, a synthetic one? The meeting has caused a bit of a stir, generating a May 13, 2016 article by Andrew Pollack for the New York Times (NYT) and blog postings including Andrew Balmer’s May 18, 2016 posting for the Guardian. There’s also a measured and somewhat sympathetic account of what happened by Jeff Bessen in a May 24, 2016 essay for The Conversation (h/t phys.org).
Starting at the beginning, the May 13, 2016 article by Pollack gives an overview of what has caused the consternation,
Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes.
The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents.
While the project is still in the idea phase [emphasis mine], and also involves efforts to improve DNA synthesis in general, it was discussed at a closed-door meeting on Tuesday [May 10, 2016] at Harvard Medical School in Boston. The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.
Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals.
Secrecy has long been a part of scientific and innovation practices. For instance, research on nuclear, biological or chemical weapons is often conducted in secret. In his excellent book on Secrecy and Science, Brian Balmer [relation to Andrew?] describes how the Manhattan Project epitomised the way in which scientific secrecy operates, explaining how specific sites were kept secret, but also how projects were compartmentalised, so that knowledge was exchanged only on a ‘need-to-know’ basis, meaning that only a very few people had any real understanding of the programme as a whole. In other words, attempts to maintain secrecy often go hand-in-hand with imperatives of efficiency, security, bureaucracy and control.
By their nature, it is often the most controversial, risky and ethically dubious research programmes that are conducted in secret, curtained-off from society in order to protect knowledge and technology not only from public scrutiny but also espionage or corporate theft. …
As Pollack notes in his NYT article, this is at the idea stage (i.e., it is unfunded) and one of the organizers claims that people have gotten the wrong idea about the project,
George Church, a professor of genetics at Harvard Medical School and an organizer of the proposed project, said there had been a misunderstanding. The project was not aimed at creating people, just cells, and would not be restricted to human genomes, he said. Rather it would aim to improve the ability to synthesize DNA in general, which could be applied to various animals, plants and microbes.
“They’re painting a picture which I don’t think represents the project,” Dr. Church said in an interview.
He said the meeting was closed to the news media, and people were asked not to tweet because the project organizers, in an attempt to be transparent, had submitted a paper to a scientific journal. They were therefore not supposed to discuss the idea publicly before publication. He and other organizers said ethical aspects have been amply discussed since the beginning.
Balmer explores reasons why a synthetic genome might hold appeal for scientists and notes a pitfall with current communication strategies (Note: A link has been removed),
Such a second world might be quite appealing to some researchers, representing a space in which they could run wild with their ideas without the worry of public ears overhearing. Synthetic biologists, for the most part, expect that the public is going to be scared of developments in the field, leading to what has been termed ‘synbiophobia phobia’ – the fear that the public will fear their work. This could well be at the root of the decision to hold the meeting in private, as the organisers had likely anticipated public fear at the potential of creating a human genome from scratch. But it also seems to have been a fear of the media that resulted in the curtains being pulled closed, with the invite reading, “We intentionally did not invite the media, because we want everyone to speak freely and candidly without concerns about being misquoted or misinterpreted as the discussions evolve.”
In fact, there was unintended consequence (from Balmer’s post), Note: A link has been removed,
… whatever the motivations were of those convening the closed-doors, invite-only meeting, the effect of the apparent concealment has been to worry people, even those who support synthetic biology in general. In fact, one of its most well-known advocates, Drew Endy, refused to attend and co-authored an open letter criticising the closed meeting.
…
As Endy and Laurie Zoloth’s letter argued, “The creation of new human life is one of the last human-associated processes that has not yet been industrialized or fully commodified. It remains an act of faith, joy, and hope. Discussions to synthesize, for the first time, a human genome should not occur in closed rooms.”
…
Balmer goes on to note that this secrecy has invited exactly the response the organizers feared.
Pollack’s article, which delves into the synthesis of various genomes at more length, ends with this,
Jeremy Minshull, chief executive of DNA2.0, a DNA synthesis company, questioned if the effort would be worth it.
“Our ability to understand what to build is so far behind what we can build,” said Dr. Minshull, who was invited to the meeting at Harvard but did not attend. “I just don’t think that being able to make more and more and more and cheaper and cheaper and cheaper is going to get us the understanding we need.”
Questioning whether the effort is worth it makes the reference to the Manhattan Project poignant. The Manhattan Project’s atomic bomb did not win the war against Japan. The country was ready to surrender by the time the two bombs were dropped, as Gar Alperovitz writes in his May 11, 2016 essay, We didn’t need to drop the bomb — and even our WW II military icons knew it, on Salon.com. In fact, those military icons argued against the bombing. Had there been less secrecy, it’s tempting to think we might have developed nuclear power somewhat differently.
I’m not unsympathetic to the organizers. They wanted some quiet time to develop ideas without critical scrutiny; it could be described as brainstorming or a creative process. As everyone knows, successful brainstorming and creative processes require an uncritical environment at the beginning. The winnowing/critical process follows.
Bessen’s May 24, 2016 essay focuses on the relationship between scientists and journalists (who, he argues, got this all wrong) and gives a sympathetic view of the meeting’s original purpose, which was to share a paper; an extension of the paper’s embargo made that impossible and put the organizers in the position of having to scramble at the last minute,
Three weeks later, the exact details of what happened are still being contested. I’m a researcher in synthetic biology, and I learned of the project from reading the newspaper. I reached out to the meeting’s organizers, who – for reasons I’ll explain – declined to comment for this article. But in conversations with meeting invitees, as well as some critics, I’ve found that much of the press coverage was misleading, and says more about the relationship between journalists and scientists than the meeting itself.
What really happened behind closed doors when over 130 scientists, industry leaders and ethicists convened to talk about synthesizing a human genome? How did these sessions end up so widely misunderstood by the media and the public?
…
Those invited say the organizers hoped to inspire scientists and the public with a new grand challenge project: to advance from reading genomes to writing them, by manufacturing them from individual DNA building blocks. In an invitation dated March 30, the hosts proposed a bold collaborative effort to “synthesize a complete human genome within a cell line.” Panels tackled whether such an effort is worthwhile, as well as the ethical, technological and economic challenges.
The conversation was not intended to be restricted. The meeting organizers – Harvard geneticist George Church; New York University systems geneticist Jef Boeke; Andrew Hessel, of the Bio/Nano research group at Autodesk, Inc.; and Nancy J. Kelley, a lawyer specializing in biotechnology consulting – had plans to engage the broader scientific community, as well as industry, policy makers and the public. They made a video recording of the entire meeting, originally intended to be live-streamed over the Internet. They planned to apply for federal funding, which would invite regulatory oversight. And they submitted a white paper to a major peer-reviewed journal explaining the scientific, technological and ethical aspects of the project.
But the publication of the paper was delayed – editors commonly ask for revisions as part of the peer review process and Dr. Church told STAT News they wanted “more information about the ethical, social, and legal components of synthesizing genomes” included. (As of this writing, the paper has not yet come out.) The organizers are prohibited from discussing the paper in public until it is published – a common journal policy known as an embargo. In deference to the embargo, they declined to comment in detail for this article.
While Bessen is definitely sympathetic to the organizers, he does have some issues with what transpired but he saves some of his last words for a discussion of social media and traditional science publishing before finishing with a plea for balance,
The episode also points to an emerging conflict between social media and traditional science publishing. Research journals move at a glacial pace; nearly all of my colleagues have at one point waited six months or more to publish. Will the long publication cycle and the normally obscure embargo policy be able to adjust to an era when scientific discussions happen at the speed of Twitter?
Researchers must rely on journalists for their communication skills and the audience they reach. And journalists will play a crucial role in facilitating the ethical discussion around synthetic biology – one whose stakeholders include scientists as well as ethicists, policy makers and the broader public – and what the goals and action items of such a debate will be. Critically, a balance must be struck between the watchdog role of the press and the legitimate needs of any profession to carry out some of their discussions in private. [emphases mine]
Despite the similarities between my conclusion and Bessen’s, I drafted this post on May 18, 2016 and did not see Bessen’s piece until this morning, May 25, 2016. In all probability, we are not the only two coming to the same conclusion.
There is a fascinating set of stories about bioart designed to whet your appetite for more (*) in a Nov. 23, 2015 Cell Press news release on EurekAlert (Note: A link has been removed),
Joe Davis is an artist who works not only with paints or pastels, but also with genes and bacteria. In 1986, he collaborated with geneticist Dan Boyd to encode a symbol for life and femininity into an E. coli bacterium. The piece, called Microvenus, was the first artwork to use the tools and techniques of molecular biology. Since then, bioart has become one of several contemporary art forms (including reclamation art and nanoart) that apply scientific methods and technology to explore living systems as artistic subjects. A review of the field, published November 23, can be found in Trends in Biotechnology.
Bioart ranges from bacterial manipulation to glowing rabbits, cellular sculptures, and–in the case of Australian-British artist Nina Sellars–documentation of an ear prosthetic that was implanted onto fellow artist Stelarc’s arm. In the pursuit of creating art, practitioners have generated tools and techniques that have aided researchers, while sometimes crossing into controversy, such as by releasing invasive species into the environment, blurring the lines between art and modern biology, raising philosophical, societal, and environmental issues that challenge scientific thinking.
“Most people don’t know that bioart exists, but it can enable scientists to produce new ideas and give us opportunities to look differently at problems,” says author Ali K. Yetisen, who works at Harvard Medical School and the Wellman Center for Photomedicine, Massachusetts General Hospital. “At the same time there’s been a lot of ethical and safety concerns happening around bioart and artists who wanted to get involved in the past have made mistakes.”
Here’s a sample of Joe Davis’s work,
This photograph shows polyptich paintings by Joe Davis of his 28-mer Microvenus DNA molecule (2006 Exhibition in Greece at Athens School of Fine Arts). Credit: Courtesy of Joe Davis
The news release goes on to recount a brief history of bioart, which stretches back to 1928 and then further back into the 19th and 18th centuries,
In between experiments, Alexander Fleming would paint stick figures and landscapes on paper and in Petri dishes using bacteria. In 1928, after taking a brief hiatus from the lab, he noticed that portions of his “germ paintings” had been killed. The culprit was a fungus, Penicillium, the source of penicillin – a discovery that would revolutionize medicine for decades to come.
In 1938, photographer Edward Steichen used a chemical to genetically alter and produce interesting variations in flowering delphiniums. This chemical, colchicine, would later be used by horticulturalists to produce desirable mutations in crops and ornamental plants.
In the late 18th and early 19th centuries, the arts and sciences moved away from traditionally shared interests and formed secular divisions that persisted well into the 20th century. “Appearance of environmental art in the 1970s brought about renewed awareness of special relationships between art and the natural world,” Yetisen says.
To demonstrate how we change landscapes, American sculptor Robert Smithson paved a hillside with asphalt, while Bulgarian artist Christo Javacheff (of Christo and Jeanne-Claude) surrounded barrier islands with bright pink fabric.
These pieces could sometimes be destructive, however, such as in Ten Turtles Set Free by German-born Hans Haacke. To draw attention to the excesses of the pet trade, he released what he thought were endangered tortoises back to their natural habitat in France, but he inadvertently released the wrong subspecies, thus compromising the genetic lineages of the endangered tortoises as the two varieties began to mate.
By the late 1900s, technological advances began to draw artists’ attention to biology, and by the 2000s, bioart began to take shape as an artistic identity. Following Joe Davis’ transgenic Microvenus came a miniaturized leather jacket made of skin cells, part of the Tissue Culture & Art Project (initiated in 1996) by the duo Oron Catts and Ionat Zurr. Other examples of bioart include: the use of mutant cacti to simulate the appearance of human hair in place of cactus spines by Laura Cinti of University College London’s C-Lab; modification of butterfly wings for artistic purposes by Marta de Menezes of Portugal; and photographs of amphibian deformation by American Brandon Ballengée.
“Bioart encourages discussions about societal, philosophical, and environmental issues and can help enhance public understanding of advances in biotechnology and genetic engineering,” says co-author Ahmet F. Coskun, who works in the Division of Chemistry and Chemical Engineering at California Institute of Technology.
Life as a Bioartist
Today, Joe Davis is a research affiliate at MIT Biology and “Artist-Scientist” at the George Church Laboratory at Harvard–a place that fosters creativity and technological development around genetic engineering and synthetic biology. “It’s Oz, pure and simple,” Davis says. “The total amount of resources in this environment and the minds that are accessible, it’s like I come to the city of Oz every day.”
But it’s not a one-way street. “My particular lab depends on thinking outside the box and not dismissing things because they sound like science fiction,” says [George M.] Church, who is also part of the Wyss Institute for Biologically Inspired Engineering. “Joe is terrific at keeping us flexible and nimble in that regard.”
For example, Davis is working with several members of the Church lab to perform metagenomics analyses of the dust that accumulates at the bottom of money-counting machines. Another project involves genetically engineering silk worms to spin metallic gold–an homage to the fairy tale of Rumpelstiltskin.
“I collaborate with many colleagues on projects that don’t necessarily have direct scientific results, but they’re excited to pursue these avenues of inquiry that they might not or would not look into ordinarily – they might try to hide it, but a lot of scientists have poetic souls,” Davis says. “Art, like science, has to describe the whole world, and you can’t describe something you’re basically clueless about. The most exciting part of these activities is satiating overwhelming curiosity about everything around you.”
The number of bioartists is still small, Davis says, partly because of a lack of federal funding of the arts in general. Accessibility to the types of equipment bioartists want to experiment with can also be an issue. While Davis has partnered with labs over the past few decades, other artists affiliate themselves with community access laboratories that are run by do-it-yourself biologists. One way that universities can help is to create departmental-wide positions for bioartists to collaborate with scientists.
“In the past, there have been artists affiliated with departments in a very utilitarian way to produce figures or illustrations,” Church says. “Having someone like Joe stimulates our lab to come together in new ways and if we had more bioartists, I think thinking out of the box would be a more common thing.”
“In the era of genetic engineering, bioart will gain new meanings and annotations in social and scientific contexts,” says Yetisen. “Bioartists will surely take up new roles in science laboratories, but this will be subject to ethical criticism and controversy as a matter of course.”
Here’s a link to and a citation for the paper,
Bioart by Ali K. Yetisen, Joe Davis, Ahmet F. Coskun, George M. Church, and Seok Hyun Yun. Trends in Biotechnology, DOI: http://dx.doi.org/10.1016/j.tibtech.2015.09.011 Published online: November 23, 2015
This paper appears to be open access.
*Removed the word ‘featured’ on Dec. 1, 2015 at 1030 hours PDT.
A Sept. 17, 2015 news item on Nanotechnology Now highlights an article in which experts review the state of the synthetic biology field and discuss the need for safety as the field is poised to move from the laboratory into the real world,
Targeted cancer treatments, toxicity sensors and living factories: synthetic biology has the potential to revolutionize science and medicine. But before the technology is ready for real-world applications, more attention needs to be paid to its safety and stability, say experts in a review article published in Current Opinion in Chemical Biology.
Synthetic biology involves engineering microbes like bacteria to program them to behave in certain ways. For example, bacteria can be engineered to glow when they detect certain molecules, and can be turned into tiny factories to produce chemicals.
Synthetic biology has now reached a stage where it’s ready to move out of the lab and into the real world, to be used in patients and in the field. According to Professor Pamela Silver, one of the authors of the article from Harvard Medical School in the US, this move means researchers should increase focus on the safety of engineered microbes in biological systems like the human body.
“Historically, molecular biologists engineered microbes as industrial organisms to produce different molecules,” said Professor Silver. “The more we discovered about microbes, the easier it was to program them. We’ve now reached a very exciting phase in synthetic biology where we’re ready to apply what we’ve developed in the real world, and this is where safety is vital.”
Microbes have an impact on health; microbiome research – studies of all the microbes that live in the body – is revealing ever more about how they interact with animals, and this is making them easier and faster to engineer. Scientists are now able to synthesize whole genomes, making it technically possible to build a microbe from scratch.
“Ultimately, this is the future – this will be the way we program microbes and other cell types,” said Dr. Silver. “Microbes have small genomes, so they’re not too complex to build from scratch. That gives us huge opportunities to design them to do specific jobs, and we can also program in safety mechanisms.”
One of the big safety issues associated with engineering microbial genomes is the transfer of their genes to wild microbes. Microbes are able to transfer segments of their DNA during reproduction, which leads to genetic evolution. One key challenge associated with synthetic biology is preventing this transfer between the engineered genome and wild microbial genomes.
There are already several levels of safety infrastructure in place to ensure that no unethical research is done and to control the kinds of organisms allowed in laboratories. The focus now, according to Dr. Silver, is on technology to ensure safety. When scientists build synthetic microbes, they can program in mechanisms called kill switches that cause the microbes to self-destruct if their environment changes in certain ways.
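Conceptually, a kill switch is a conditional wired into the cell: stay alive only while the environment matches the one the engineers intended. The toy Python model below is entirely my own illustration (a hypothetical required nutrient, not any real genetic circuit):

```python
class EngineeredMicrobe:
    """Toy model of a kill switch: the cell self-destructs when its
    environment no longer supplies a required synthetic nutrient."""

    def __init__(self, nutrient_threshold=0.1):
        self.alive = True
        self.nutrient_threshold = nutrient_threshold

    def sense_environment(self, nutrient_level):
        # Kill switch: outside the permissive environment, self-destruct
        # (in a real circuit, e.g., a toxin gene would be de-repressed).
        if nutrient_level < self.nutrient_threshold:
            self.alive = False

microbe = EngineeredMicrobe()
microbe.sense_environment(0.8)   # in the bioreactor: nutrient plentiful
print(microbe.alive)             # True
microbe.sense_environment(0.01)  # escaped into the wild: nutrient absent
print(microbe.alive)             # False
```

Real designs are of course biochemical rather than digital, but the logic is the same: survival is made conditional on a signal that exists only in the permitted environment.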
Microbial sensors and drug delivery systems can be shown to work in the lab, but researchers are not yet sure how they will function in a human body or a large-scale bioreactor. Engineered organisms have huge potential, but they will only be useful if proven to be reliable, predictable, and cost effective. Today, engineered bacteria are already in clinical trials for cancer, and this is just the beginning, says Dr. Silver.
“The rate at which this field is moving forward is incredible. I don’t know what happened – maybe it’s the media coverage, maybe the charisma – but we’re on the verge of something very exciting. Once we’ve figured out how to make genomes more quickly and easily, synthetic biology will change the way we work as researchers, and even the way we treat diseases.”
Lucy Goodchild van Hilten has written a Sept. 16, 2015 article for Elsevier about this paper,
In January, the UK government announced a funding injection of £40 million to boost synthetic biology research, adding three new Synthetic Biology Research Centres (SBRCs) in Manchester, Edinburgh and Warwick. The additional funding takes the UK’s total public spending on synthetic biology to £200 million – an investment that hints at the commercial potential of synthetic biology.
In fact, according to the authors of a new review published in Current Opinion in Chemical Biology, synthetic biology has the potential to revolutionize science and medicine. …
I believe this paper is open access until January 16, 2016.
As the paper has a nice introductory description of synthetic biology, I thought I’d include it here, as well as, the conclusion which is not as safety-oriented as I expected,
Synthetic biology allows scientists to re-program interactions between genes, proteins, and small molecules. One of the goals of synthetic biology is to produce organisms that predictably carry out desired functions and thereby perform as well-controlled so-called biological devices. Together, synthetic and chemical biology can provide increased control over biological systems by changing the ways these systems respond to and produce chemical stimuli. Sensors, which detect small molecules and direct later cellular function, provide the basis for chemical control over biological systems. The techniques of synthetic biology and metabolic engineering can link sensors to metabolic processes and proteins with many different activities. In this review we stratify the activities affected by sensors to three different levels: sensor-reporters that provide a simple read-out of small molecule levels, sensor-effectors that alter the behavior of single organisms in response to small molecules, and sensor-effectors that coordinate the activities of multiple organisms in response to small molecules …
…
Conclusion
We have come to the point in synthetic biology where there are many lab-scale or proof-of-concept examples of chemically controlled systems useful to sense small molecules, treat disease, and produce commercially useful compounds. These systems have great potential, but more attention needs to be paid to their stability, efficacy, and safety. Being that the sensor-effectors discussed above function in living, evolving organisms, it is unclear how well they will retain function when distributed in a patient or in a large-scale bioreactor. Future efforts should focus on developing these sensor-effectors for real-world application. Engineered organisms will only be useful if we can prove that their functions are reliable, predictable, and cost effective.
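The three sensor levels from the paper’s introduction can be caricatured in a few lines of code. This is my illustrative sketch with invented names and thresholds, not anything from the paper itself (the third level, multi-organism coordination, would chain such effectors across cells):

```python
def sensor_reporter(molecule_conc, threshold=1.0):
    """Level 1: a simple read-out of a small-molecule level,
    e.g. a fluorescent signal above some detection threshold."""
    return "glow" if molecule_conc > threshold else "dark"

def sensor_effector(cell_state, molecule_conc, threshold=1.0):
    """Level 2: the sensed signal alters a single cell's behaviour,
    e.g. switching a drug-production pathway on or off."""
    cell_state["producing_drug"] = molecule_conc > threshold
    return cell_state

print(sensor_reporter(2.0))                                   # glow
print(sensor_effector({"producing_drug": False}, 2.0))        # pathway on
```

The distinction is simply what the sensed signal drives: a passive readout, a change in one cell’s behaviour, or coordinated behaviour across a population.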
I’ve decided to do a roundup of the various brain-related projects I’ve been coming across in the last several months. I was inspired by this article (Real-life Jedi: Pushing the limits of mind control) by Katia Moskvitch,
You don’t have to be a Jedi to make things move with your mind.
Granted, we may not be able to lift a spaceship out of a swamp like Yoda does in The Empire Strikes Back, but it is possible to steer a model car, drive a wheelchair and control a robotic exoskeleton with just your thoughts.
…
We are standing in a testing room at IBM’s Emerging Technologies lab in Winchester, England.
On my head is a strange headset that looks like a black plastic squid. Its 14 tendrils, each capped with a moistened electrode, are supposed to detect specific brain signals.
In front of us is a computer screen, displaying an image of a floating cube.
As I think about pushing it, the cube responds by drifting into the distance.
Moskvitch goes on to discuss a number of projects that translate thought into movement via various pieces of equipment before she mentions a project at Brown University (US) where researchers are implanting computer chips into brains,
Headsets and helmets offer cheap, easy-to-use ways of tapping into the mind. But there are other,
Imagine some kind of a wireless computer device in your head that you’ll use for mind control – what if people hacked into that?
…
At Brown Institute for Brain Science in the US, scientists are busy inserting chips right into the human brain.
The technology, dubbed BrainGate, sends mental commands directly to a PC.
Subjects still have to be physically “plugged” into a computer via cables coming out of their heads, in a setup reminiscent of the film The Matrix. However, the team is now working on miniaturising the chips and making them wireless.
The purpose of the first phase of the pilot clinical study of the BrainGate2 Neural Interface System is to obtain preliminary device safety information and to demonstrate the feasibility of people with tetraplegia using the System to control a computer cursor and other assistive devices with their thoughts. Another goal of the study is to determine the participants’ ability to operate communication software, such as e-mail, simply by imagining the movement of their own hand. The study is invasive and requires surgery.
Individuals with limited or no ability to use both hands due to cervical spinal cord injury, brainstem stroke, muscular dystrophy, or amyotrophic lateral sclerosis (ALS) or other motor neuron diseases are being recruited into a clinical study at Massachusetts General Hospital (MGH) and Stanford University Medical Center. Clinical trial participants must live within a three-hour drive of Boston, MA or Palo Alto, CA. Clinical trial sites at other locations may be opened in the future. The study requires a commitment of 13 months.
They have been recruiting since at least November 2011, from the Nov. 14, 2011 news item by Tanya Lewis on MedicalXpress,
Stanford University researchers are enrolling participants in a pioneering study investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms and other assistive devices.
…
The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.
Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
Confusingly, there seem to be two BrainGate organizations. One appears to be a research entity where a number of institutions collaborate and the other is some sort of jointly held company. From the About Us webpage of the BrainGate research entity,
In the late 1990s, the initial translation of fundamental neuroengineering research from “bench to bedside” – that is, to pilot clinical testing – would require a level of financial commitment ($10s of millions) available only from private sources. In 2002, a Brown University spin-off/startup medical device company, Cyberkinetics, Inc. (later, Cyberkinetics Neurotechnology Systems, Inc.) was formed to collect the regulatory permissions and financial resources required to launch pilot clinical trials of a first-generation neural interface system. The company’s efforts and substantial initial capital investment led to the translation of the preclinical research at Brown University to an initial human device, the BrainGate Neural Interface System [Caution: Investigational Device. Limited by Federal Law to Investigational Use]. The BrainGate system uses a brain-implantable sensor to detect neural signals that are then decoded to provide control signals for assistive technologies. In 2004, Cyberkinetics received from the U.S. Food and Drug Administration (FDA) the first of two Investigational Device Exemptions (IDEs) to perform this research. Hospitals in Rhode Island, Massachusetts, and Illinois were established as clinical sites for the pilot clinical trial run by Cyberkinetics. Four trial participants with tetraplegia (decreased ability to use the arms and legs) were enrolled in the study and further helped to develop the BrainGate device. Initial results from these trials have been published or presented, with additional publications in preparation.
While scientific progress towards the creation of this promising technology has been steady and encouraging, Cyberkinetics’ financial sponsorship of the BrainGate research – without which the research could not have been started – began to wane. In 2007, in response to business pressures and changes in the capital markets, Cyberkinetics turned its focus to other medical devices. Although Cyberkinetics’ own funds became unavailable for BrainGate research, the research continued through grants and subcontracts from federal sources. By early 2008 it became clear that Cyberkinetics would eventually need to withdraw completely from directing the pilot clinical trials of the BrainGate device. Also in 2008, Cyberkinetics spun off its device manufacturing to new ownership, BlackRock Microsystems, Inc., which now produces and is further developing research products as well as clinically-validated (510(k)-cleared) implantable neural recording devices.
Beginning in mid 2008, with the agreement of Cyberkinetics, a new, fully academically-based IDE application (for the “BrainGate2 Neural Interface System”) was developed to continue this important research. In May 2009, the FDA provided a new IDE for the BrainGate2 pilot clinical trial. [Caution: Investigational Device. Limited by Federal Law to Investigational Use.] The BrainGate2 pilot clinical trial is directed by faculty in the Department of Neurology at Massachusetts General Hospital, a teaching affiliate of Harvard Medical School; the research is performed in close scientific collaboration with Brown University’s Department of Neuroscience, School of Engineering, and Brown Institute for Brain Sciences, and the Rehabilitation Research and Development Service of the U.S. Department of Veteran’s Affairs at the Providence VA Medical Center. Additionally, in late 2011, Stanford University joined the BrainGate Research Team as a clinical site and is currently enrolling participants in the clinical trial. This interdisciplinary research team includes scientific partners from the Functional Electrical Stimulation Center at Case Western Reserve University and the Cleveland VA Medical Center. As was true of the decades of fundamental, preclinical research that provided the basis for the recent clinical studies, funding for BrainGate research is now entirely from federal and philanthropic sources.
The BrainGate Research Team at Brown University, Massachusetts General Hospital, Stanford University, and Providence VA Medical Center comprises physicians, scientists, and engineers working together to advance understanding of human brain function and to develop neurotechnologies for people with neurologic disease, injury, or limb loss.
The BrainGate™ Co. is a privately-held firm focused on the advancement of the BrainGate™ Neural Interface System. The Company owns the intellectual property of the BrainGate™ system as well as new technology being developed by the BrainGate company. In addition, the Company also owns the intellectual property of Cyberkinetics, which it purchased in April 2009.
Meanwhile, in Europe there are two projects: BrainAble and the Human Brain Project. The BrainAble project is similar to BrainGate in that it is intended for people with injuries but they seem to be concentrating on a helmet or cap for thought transmission (as per Moskvitch’s experience at the beginning of this posting). From the Feb. 28, 2012 news item on Science Daily,
In the 2009 film Surrogates, humans live vicariously through robots while safely remaining in their own homes. That sci-fi future is still a long way off, but recent advances in technology, supported by EU funding, are bringing this technology a step closer to reality in order to give disabled people more autonomy and independence than ever before.
…
“Our aim is to give people with motor disabilities as much autonomy as technology currently allows and in turn greatly improve their quality of life,” says Felip Miralles at Barcelona Digital Technology Centre, a Spanish ICT research centre.
Mr. Miralles is coordinating the BrainAble* project (http://www.brainable.org/), a three-year initiative supported by EUR 2.3 million in funding from the European Commission to develop and integrate a range of different technologies, services and applications into a commercial system for people with motor disabilities.
In terms of HCI [human-computer interface], BrainAble improves both direct and indirect interaction between the user and his smart home. Direct control is upgraded by creating tools that allow controlling inner and outer environments using a “hybrid” Brain Computer Interface (BNCI) system able to take into account other sources of information such as measures of boredom, confusion, frustration by means of the so-called physiological and affective sensors.
Furthermore, interaction is enhanced by means of Ambient Intelligence (AmI) focused on creating proactive and context-aware environments by adding intelligence to the user’s surroundings. AmI’s main purpose is to aid and facilitate the user’s living conditions by creating proactive environments to provide assistance.
Human-Computer Interfaces are complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that will help the disabled move around freely, and interact with any sort of devices. Even more, the VR will provide self-expression assets using music, pictures and text, communicate online and offline with other people, play games to counteract cognitive decline, and get trained in new functionalities and tasks.
Perhaps this video helps,
Another European project, NeuroCare, which I discussed in my March 5, 2012 posting, is focused on creating neural implants to replace damaged and/or destroyed sensory cells in the eye or the ear.
The Human Brain Project is, despite its title, a neuromorphic engineering project (although the researchers do mention some medical applications on the project’s home page) in common with the work being done at the University of Michigan/HRL Labs mentioned in my April 19, 2012 posting (A step closer to artificial synapses courtesy of memristors) about that project. From the April 11, 2012 news item about the Human Brain Project on Science Daily,
Researchers at the EPFL [Ecole Polytechnique Fédérale de Lausanne] have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.
The discovery, using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect. “It is the door that opens to a world of predictive biology,” says Henry Markram, the senior author on the study, which is published this week in PLoS ONE.
Here’s a bit more about the Human Brain Project (from the home page),
Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all of them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even in this way, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.
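The multi-level idea above — spend the expensive computation only on the most active groups of neurons — can be sketched as a simple level-of-detail loop. Everything here is an assumed stand-in (the threshold, the update rules, the group count); it only illustrates the scheduling principle, not any Human Brain Project code.

```python
import numpy as np

# Sketch of multi-level simulation: groups whose activity exceeds a
# threshold get a costly detailed update; the rest get a cheap coarse one.
rng = np.random.default_rng(1)

n_groups = 8
activity = rng.random(n_groups)  # current mean activity per group (simulated)
DETAIL_THRESHOLD = 0.7           # assumed cut-off for detailed simulation

def detailed_update(a):
    # Stand-in for an expensive compartmental simulation of every neuron.
    return a * 0.95

def coarse_update(a):
    # Stand-in for a cheap population-level (mean-field) approximation.
    return a * 0.9

for step in range(10):
    detailed = activity >= DETAIL_THRESHOLD
    activity[detailed] = detailed_update(activity[detailed])
    activity[~detailed] = coarse_update(activity[~detailed])

print(activity.round(3))
```

The payoff is that the detailed branch runs only for the handful of groups that need it, which is what makes whole-brain simulation even conceivable on foreseeable hardware.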
The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson’s, and Alzheimer’s. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.
The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively – all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.
The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first ever, detailed model of the neocortical column, one of the brain’s basic building blocks.
The Human Brain Project is a flagship project in contention for the 1B Euro research prize that I’ve mentioned in the context of the GRAPHENE-CA flagship project (my Feb. 13, 2012 posting gives a better description of these flagship projects while mentioning both GRAPHENE-CA and another brain-computer interface project, PRESENCCIA).
Part of the reason for doing this roundup, is the opportunity to look at a number of these projects in one posting; the effect is more overwhelming than I expected.
For anyone who’s interested in Markram’s paper (open access),
Georges Khazen, Sean L. Hill, Felix Schürmann, Henry Markram. Combinatorial Expression Rules of Ion Channel Genes in Juvenile Rat (Rattus norvegicus) Neocortical Neurons. PLoS ONE, 2012; 7 (4): e34786 DOI: 10.1371/journal.pone.0034786
I do have earlier postings on brains and neuroprostheses, one of the more recent ones is this March 16, 2012 posting. Meanwhile, there are new announcements from Northwestern University (US) and the US National Institutes of Health (National Institute of Neurological Disorders and Stroke). From the April 18, 2012 news item (originating from the National Institutes of Health) on Science Daily,
An artificial connection between the brain and muscles can restore complex hand movements in monkeys following paralysis, according to a study funded by the National Institutes of Health.
In a report in the journal Nature, researchers describe how they combined two pieces of technology to create a neuroprosthesis — a device that replaces lost or impaired nervous system function. One piece is a multi-electrode array implanted directly into the brain which serves as a brain-computer interface (BCI). The array allows researchers to detect the activity of about 100 brain cells and decipher the signals that generate arm and hand movements. The second piece is a functional electrical stimulation (FES) device that delivers electrical current to the paralyzed muscles, causing them to contract. The brain array activates the FES device directly, bypassing the spinal cord to allow intentional, brain-controlled muscle contractions and restore movement.
A new Northwestern Medicine brain-machine technology delivers messages from the brain directly to the muscles — bypassing the spinal cord — to enable voluntary and complex movement of a paralyzed hand. The device could eventually be tested on, and perhaps aid, paralyzed patients.
…
The research was done in monkeys, whose electrical brain and muscle signals were recorded by implanted electrodes when they grasped a ball, lifted it and released it into a small tube. Those recordings allowed the researchers to develop an algorithm or “decoder” that enabled them to process the brain signals and predict the patterns of muscle activity when the monkeys wanted to move the ball.
These experiments were performed by Christian Ethier, a post-doctoral fellow, and Emily Oby, a graduate student in neuroscience, both at the Feinberg School of Medicine. The researchers gave the monkeys a local anesthetic to block nerve activity at the elbow, causing temporary, painless paralysis of the hand. With the help of the special devices in the brain and the arm — together called a neuroprosthesis — the monkeys’ brain signals were used to control tiny electric currents delivered in less than 40 milliseconds to their muscles, causing them to contract, and allowing the monkeys to pick up the ball and complete the task nearly as well as they did before.
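The loop described in the two paragraphs above — brain signals in, decoded muscle activity out, FES currents delivered within tens of milliseconds — can be pictured with a toy pipeline. This is a hedged sketch: the real decoder and stimulation parameters from the Nature paper are not reproduced here, and the muscle count, current ceiling, and weights below are all invented for illustration.

```python
import numpy as np

# Illustrative neuroprosthesis loop: firing rates -> decoder -> predicted
# muscle activity -> FES stimulation currents. All values are assumed.
rng = np.random.default_rng(2)

N_CELLS = 100          # the array records about 100 brain cells
N_MUSCLES = 4          # assumed number of stimulated hand muscles
MAX_CURRENT_MA = 20.0  # assumed FES safety ceiling, in milliamps

# Pretend these weights were fitted during the ball-grasping calibration task.
decoder_weights = rng.normal(scale=0.1, size=(N_CELLS, N_MUSCLES))

def decode_to_fes(firing_rates):
    """Map one bin of firing rates to an FES current for each muscle."""
    activation = firing_rates @ decoder_weights    # predicted muscle activity
    activation = np.clip(activation, 0.0, None)    # muscles cannot contract "negatively"
    current = np.clip(activation * MAX_CURRENT_MA, 0.0, MAX_CURRENT_MA)
    return current

rates = rng.poisson(8.0, size=N_CELLS).astype(float)
currents = decode_to_fes(rates)
print(currents)  # milliamps per muscle, one value per stimulated muscle
```

The clipping step stands in for the safety constraints any real FES system enforces; the interesting engineering is in fitting the decoder and keeping the whole loop under the roughly 40-millisecond budget the researchers mention.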
“The monkey won’t use his hand perfectly, but there is a process of motor learning that we think is very similar to the process you go through when you learn to use a new computer mouse or a different tennis racquet. Things are different and you learn to adjust to them,” said Miller [Lee E. Miller], also a professor of physiology and of physical medicine and rehabilitation at Feinberg and a Sensory Motor Performance Program lab chief at the Rehabilitation Institute of Chicago.
The National Institutes of Health news item supplies a little history and background for this latest breakthrough while the Northwestern University news item offers more technical details.
You can find the researchers’ paper with this citation (assuming you can get past the paywall),
C. Ethier, E. R. Oby, M. J. Bauman, L. E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 2012; DOI: 10.1038/nature10987
I was surprised to find the Health Research Fund of Québec listed as one of the funders but perhaps Christian Ethier has some connection with the province.