Tag Archives: rats

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add any new experts if the authors planned to have their paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or, at the very least, acknowledge (e.g., ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.
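
A small aside on that last point: one of the simplest defences against tampering with digital DNA sequences is to treat them like any other critical file, recording a cryptographic fingerprint when a design is approved and verifying it before synthesis. The sketch below is my own illustration of that idea (the sequences and workflow are invented), not anything proposed in the report.

```python
import hashlib

def sequence_fingerprint(dna: str) -> str:
    """SHA-256 fingerprint of a DNA sequence, normalized to uppercase
    with whitespace removed so trivial reformatting doesn't change it."""
    normalized = "".join(dna.split()).upper()
    return hashlib.sha256(normalized.encode("ascii")).hexdigest()

# Fingerprint recorded when the (hypothetical) design was reviewed and approved.
approved = sequence_fingerprint("ATGGCC TTTAAA CCCGGG")

# Verify the sequence actually sent to the synthesis robot.
received = "ATGGCCTTTAAACCCGGG"   # unchanged -> matches the approved fingerprint
tampered = "ATGGCCTTTAAACCCGGA"   # single-base edit -> mismatch

for label, seq in [("received", received), ("tampered", tampered)]:
    ok = sequence_fingerprint(seq) == approved
    print(f"{label}: {'OK' if ok else 'FINGERPRINT MISMATCH'}")
```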

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally, I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a New Zealand project, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.
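
Yong’s fur-colour example is easy to turn into arithmetic. The toy simulation below (my own illustration, not code from the article or from Esvelt’s work) compares ordinary Mendelian inheritance, where each parent passes on one of its two gene copies at random, with an idealized gene drive that converts every heterozygote into a carrier of two driven copies, so the driven allele sweeps through the population within a few generations.

```python
import random

def next_generation(population, drive=False, offspring_per_pair=2):
    """Produce the next generation from random matings.

    population: list of genotypes, each a tuple of two alleles,
    'B' (brown / driven allele) or 'w' (white / wild-type allele).
    With drive=True, any rat carrying one driven copy is converted to
    carrying two before breeding -- the idealized 100%-efficient drive.
    """
    if drive:
        population = [('B', 'B') if 'B' in g else g for g in population]
    random.shuffle(population)
    children = []
    for mom, dad in zip(population[0::2], population[1::2]):
        for _ in range(offspring_per_pair):
            children.append((random.choice(mom), random.choice(dad)))
    return children

def drive_allele_frequency(population):
    alleles = [a for genotype in population for a in genotype]
    return alleles.count('B') / len(alleles)

# Start with half brown-homozygous and half white-homozygous rats.
start = [('B', 'B')] * 500 + [('w', 'w')] * 500

for label, use_drive in [("Mendelian", False), ("Gene drive", True)]:
    pop = list(start)
    freqs = []
    for _ in range(8):
        pop = next_generation(pop, drive=use_drive)
        freqs.append(drive_allele_frequency(pop))
    print(label, [round(f, 2) for f in freqs])
# Mendelian frequencies hover around 0.5; the driven allele climbs toward 1.
```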

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety, as it provides some insight into gene drives and, with some imagination on the reader’s part, the potential for the other technologies discussed in the report.

One last comment: I notice that Eric Drexler is cited as one of the report’s authors. He’s familiar to me as K. Eric Drexler, the author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first, about implanting a brain chip that allows rats to perceive infrared light, was mentioned in my Feb. 15, 2013 posting. The latest is a brain-to-brain (rat-to-rat) communication project, as per a Feb. 28, 2013 news release on *EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, the origin for the news release on EurekAlert, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
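
The encoder/decoder arrangement is, in effect, a noisy one-bit channel: a sample of the encoder’s motor-cortex activity is mapped to a stimulation pattern, and the decoder has to recover the lever choice from it. The toy simulation below is purely my own sketch of why such a channel tops out well below 100% (the firing rates, noise level, and threshold are invented; the noise is tuned so the result lands near the ~70% the real rats achieved):

```python
import random

def encoder_activity(choice, noise=9.0):
    """Simulated spike count from a sample of the encoder's motor cortex.
    Assumed means: ~20 spikes for a left press, ~30 for a right press,
    plus Gaussian noise. All values are invented for illustration."""
    mean = 20.0 if choice == "left" else 30.0
    return random.gauss(mean, noise)

def decode(rate, threshold=25.0):
    """The decoder's rule: stimulation patterns derived from low rates
    map to the left lever, high rates to the right lever."""
    return "left" if rate < threshold else "right"

trials = 10_000
correct = 0
for _ in range(trials):
    choice = random.choice(["left", "right"])
    stimulation = encoder_activity(choice)   # encoder activity -> stimulation pattern
    if decode(stimulation) == choice:        # decoder's lever press
        correct += 1

print(f"decoder success rate: {correct / trials:.0%}")  # ~70% with these invented numbers
```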

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
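
“Significantly above chance” is easy to make concrete. With two reward ports, a decoder guessing at random would be right about 50% of the time, so the question is how unlikely a 65% success rate is by luck alone. The article doesn’t give the number of trials, so the figure below is an assumed one, purely for illustration:

```python
from math import comb

def p_value_at_least(successes, trials, p_chance=0.5):
    """Exact one-sided binomial tail: P(X >= successes) under pure chance."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# ~65% correct is reported, but not the trial count; 200 trials is assumed here.
trials = 200
successes = round(0.65 * trials)
print(f"P(>= {successes}/{trials} correct by chance) = "
      f"{p_value_at_least(successes, trials):.2e}")
```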

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions, including in an Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control. By the way, Nicolelis hopes to have a paraplegic individual (using technology Nicolelis is developing for the Walk Again project) take the opening kick at the 2014 World Cup games in Brazil.

While there’s much excitement about Nicolelis and his work, there are other ‘brain’ projects being developed in the US, including the Brain Activity Map (BAM), as James Lewis notes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.
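
To get a feel for what “each firing of every neuron … sampled on millisecond time scales” implies, here is a back-of-envelope estimate of the raw data rate. The neuron count, sampling rate, and bit depth are my own illustrative assumptions, not specifications from the BAM proposal:

```python
# Back-of-envelope data-rate estimate for whole-brain activity mapping.
# All parameters below are illustrative assumptions, not BAM specifications.
neurons = 100e9          # ~10^11 neurons in a human brain
sample_rate_hz = 1000    # millisecond time scale => ~1 kHz per neuron
bits_per_sample = 1      # optimistic: one bit (spike / no spike) per sample

bits_per_second = neurons * sample_rate_hz * bits_per_sample
terabytes_per_second = bits_per_second / 8 / 1e12
petabytes_per_day = terabytes_per_second * 86400 / 1000

print(f"{terabytes_per_second:.1f} TB/s, about {petabytes_per_day:.0f} PB/day")
```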

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas, including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.

The Human Brain Project has been mentioned here previously. The most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative), each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the Human Genome Project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar. 6, 2013: I have found a Mar. 6, 2013 Human Connectome Project news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects, and that the reference to a ‘connectome’ in the BAM material is meant generically. I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.

Free the rats, mice, and zebrafish from the labs—replace them with in vitro assays to test nanomaterial toxicity

The July 9, 2012 Nanowerk Spotlight article by Carl Walkey (of the University of Toronto) focuses on research by Dr. André Nel and his coworkers at the California NanoSystems Institute (CNSI) and the University of California Los Angeles (UCLA) on replacing small animal model testing for nanomaterial toxicity with in vitro assays,

Currently, small animal models are the ‘gold standard’ for nanomaterial toxicity testing. In a typical assessment, researchers introduce a nanomaterial into a series of laboratory animals, generally rats or mice, or the ‘workhorse’ of toxicity testing – zebrafish (see: “High content screening of zebrafish greatly speeds up nanoparticle hazard assessment”). They then examine where the material accumulates, whether it is excreted or retained in the animal, and the effect it has on tissue and organ function. A detailed understanding often requires dozens of animals and can take many months to complete for a single formulation. The current infrastructure and funding for animal testing is insufficient to support the evaluation of all nanomaterials currently in existence, let alone those that will be developed in the near future. This is creating a growing deficit in our understanding of nanomaterial toxicity, which fuels public apprehension towards nanotechnology.

Dr. André Nel and his coworkers at the California NanoSystems Institute (CNSI) and the University of California Los Angeles (UCLA) are taking a fundamentally different approach to nanomaterial toxicity testing.

Nel believes that, under the right circumstances, resource-intensive animal experiments can be replaced with comparatively simple in vitro assays. The in vitro assays are not only less costly, but they can also be performed using high throughput (HT) techniques. By using an in vitro HT screening approach, comprehensive toxicological testing of a nanomaterial can be performed in a matter of days. Rapid information gathering will allow stakeholders to make rational, informed decisions about nanomaterials during all phases of the development process, from design to deployment.

I’ve excerpted a brief description of Nel’s approach,

Rather than using in vitro systems as direct substitutes for the in vivo case, Nel is using a mechanistic approach to connect cellular responses to more complex biological responses, attempting to employ mechanisms that are engaged at both levels and reflective of specific nanomaterial properties.

“You need to align what you test at a cellular level with what you want to know at the in vivo level,” says Nel. “If oxidative stress at the cellular level is a key initiating element, then by screening for this outcome in cells you are more likely to yield something more predictive of the in vivo outcome. We can do a lot of our mechanistic work at an implementation level that allows development of predictive screening assays.”

By measuring many relevant mechanistic responses, and integrating the results, Nel believes that the in vivo behavior of a nanomaterial can be accurately predicted, provided that enough thinking goes into devising the systems biology approach to safety assessment.
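
What “measuring many relevant mechanistic responses and integrating the results” might look like in outline is easy to sketch: score each material across a panel of mechanistic assays, combine the scores, and use the ranking to decide which materials still warrant animal follow-up. The snippet below is only my own schematic of that idea (the materials, assay readouts, weights, and threshold are all invented), not Nel’s actual screening pipeline:

```python
# Schematic tiered-screening sketch: rank nanomaterials by a composite
# in vitro hazard score and flag the worst for follow-up in vivo testing.
# Assay names, readouts, and the weighting are invented for illustration.

assays = {
    # material: (oxidative stress, membrane damage, cytokine release),
    # each already normalized to a 0-1 scale by the (hypothetical) screen
    "ZnO-A":  (0.90, 0.75, 0.60),
    "TiO2-B": (0.20, 0.10, 0.15),
    "Ag-C":   (0.65, 0.80, 0.40),
    "SiO2-D": (0.10, 0.05, 0.10),
}

weights = (0.5, 0.3, 0.2)   # assumed relative importance of each mechanism
FOLLOW_UP_THRESHOLD = 0.5   # composite score above which animal testing is triggered

def hazard_score(readouts):
    return sum(w * r for w, r in zip(weights, readouts))

ranked = sorted(assays.items(), key=lambda kv: hazard_score(kv[1]), reverse=True)
for material, readouts in ranked:
    score = hazard_score(readouts)
    flag = "follow up in vivo" if score > FOLLOW_UP_THRESHOLD else "low priority"
    print(f"{material}: composite hazard {score:.2f} -> {flag}")
```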

According to Walkey’s article, this approach could result in a ‘reverse’ nanomaterial development process,

Nel’s approach will influence not only the way in which nanomaterial toxicity is assessed, but also the way in which nanomaterials are developed. Currently, nanomaterials are designed to meet the need of a particular application. Toxicity is then evaluated retrospectively. Formulations that exhibit unacceptable toxicity at that point may be abandoned after a significant investment in development. Because Nel’s approach generates toxicity information much faster than traditional techniques, it will be possible to integrate toxicity during the design of a new nanomaterial. The proactive characterization of nanomaterial toxicity will provide feedback during the design process, producing formulations that maximize efficacy and minimize risk.

This is a very interesting article (illustrated with images and peppered with accessible explanations of the issues) for anyone following the ‘nanomaterial toxicology’ story.

Rats with robot brains

A robotic cerebellum has been implanted into a rat’s skull. From the Oct. 4, 2011 news item on Science Daily,

With new cutting-edge technology aimed at providing amputees with robotic limbs, a Tel Aviv University researcher has successfully implanted a robotic cerebellum into the skull of a rodent with brain damage, restoring its capacity for movement.

The cerebellum is responsible for co-ordinating movement, explains Prof. Matti Mintz of TAU’s [Tel Aviv University] Department of Psychology. When wired to the brain, his “robo-cerebellum” receives, interprets, and transmits sensory information from the brain stem, facilitating communication between the brain and the body. To test this robotic interface between body and brain, the researchers taught a brain-damaged rat to blink whenever they sounded a particular tone. The rat could only perform the behavior when its robotic cerebellum was functional.

This is the third item I’ve found in the last few weeks about computer chips being implanted in brains. I found the other two items in a discussion about extreme human enhancement on Slate.com (first mentioned in my Sept. 15, 2011 posting). One of Brad Allenby’s entries [the other two discussants are Nicholas Agar and Kyle Munkittrick], posted Sept. 16, 2011, featured these two references,

Experiments that began here at Arizona State University and have been continued at Duke and elsewhere have involved monkeys learning to move mechanical arms to which they are wirelessly connected as if they were part of themselves, using them effectively even when the arms (but not the monkey) are shifted up to MIT and elsewhere. More recently, monkeys with chips implanted in their brains [2008 according to the video on the website] at Duke University have kept a robot wirelessly connected to their chip running in Japan. Similar technologies are being explored to enable paraplegics and other injured people to interact with their environments and to communicate effectively, as well. The upshot is that “the body” is becoming more than just a spatial presence; rather, it becomes a designed extended cognitive network.

The projects are almost mirror images of each other. The rat can’t move without input from its robotic cerebellum, while the monkeys control the robots’ movement with their thoughts. From the Oct. 3, 2011 news release on EurekAlert,

According to the researcher, the chip is designed to mimic natural neuronal activity. “It’s a proof of the concept that we can record information from the brain, analyze it in a way similar to the biological network, and then return it to the brain,” says Prof. Mintz, who recently presented his research at the Strategies for Engineered Negligible Senescence meeting in Cambridge, UK.
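
Mintz’s record / analyze / return description maps onto a simple closed loop. The sketch below is a deliberately crude caricature of that loop for the tone-and-blink conditioning described above (the event names and the lookup-table “analysis” are my own invention; the real chip models cerebellar circuitry rather than a dictionary):

```python
def synthetic_cerebellum(sensory_events, learned_associations):
    """Toy record -> analyze -> return loop.

    sensory_events: stream of events recorded from the brain stem.
    learned_associations: maps a conditioned stimulus to a motor command,
    standing in for the conditioning the real chip acquires.
    Returns the motor commands sent back to the brain.
    """
    motor_output = []
    for event in sensory_events:                      # "record"
        command = learned_associations.get(event)     # "analyze"
        if command is not None:
            motor_output.append(command)              # "return to the brain"
    return motor_output

# The conditioned rat: a particular tone has been paired with blinking.
associations = {"tone_880Hz": "blink"}

print(synthetic_cerebellum(["noise", "tone_880Hz", "noise"], associations))
# -> ['blink']; with the chip disabled (an empty mapping), nothing happens
```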

In reading these items, I can’t help but remember that plastic surgery began as a means of helping soldiers with horrendous wounds and has now become part of the cosmetics industry. Given that history, it is possible to imagine (or to assume) that these brain ‘repairs’ could be used to augment or reshape our brains to increase intelligence, heighten senses, improve motor coordination, etc. In short, to accomplish very different goals than those originally set out.