Tag Archives: Barack Obama

Mopping up that oil spill with a nanocellulose sponge and a segue into Canadian oil and politics

Empa (Swiss Federal Laboratories for Materials Science and Technology or, in German, Eidgenössische Materialprüfungs- und Forschungsanstalt) has announced the development of a nanocellulose sponge useful for cleaning up oil spills in a May 5, 2014 news item on Nanowerk (Note: A link has been removed),

A new, absorbable material from Empa wood research could be of assistance in future oil spill accidents: a chemically modified nanocellulose sponge. The light material absorbs the oil spill, remains floating on the surface and can then be recovered. The absorbent can be produced in an environmentally-friendly manner from recycled paper, wood or agricultural by-products (“Ultralightweight and Flexible Silylated Nanocellulose Sponges for the Selective Removal of Oil from Water”).

A May 2, 2014 Empa news release (also on EurekAlert*), which originated the news item, includes a description of the potential for oil spills due to transport issues, Empa’s proposed clean-up technology, and a request for investment,

All industrial nations need large volumes of oil which is normally delivered by ocean-going tankers or via inland waterways to its destination. The most environmentally-friendly way of cleaning up nature after an oil spill accident is to absorb and recover the floating film of oil. The Empa researchers Tanja Zimmermann and Philippe Tingaut, in collaboration with Gilles Sèbe from the University of Bordeaux, have now succeeded in developing a highly absorbent material which separates the oil film from the water and can then be easily recovered: a “silylated” nanocellulose sponge. In laboratory tests the sponges absorbed up to 50 times their own weight of mineral oil or engine oil. They kept their shape to such an extent that they could be removed with pincers from the water. The next step is to fine tune the sponges so that they can be used not only on a laboratory scale but also in real disasters. To this end, a partner from industry is currently being sought.
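To put that laboratory figure in context, here’s a rough, back-of-the-envelope sketch in Python (my own, not Empa’s). It assumes, optimistically, that the 50-times-its-own-weight absorption ratio from the lab tests would hold at spill scale, which is exactly the sort of thing that still needs to be demonstrated,

```python
# Hypothetical estimate: how much silylated nanocellulose sponge would be
# needed to soak up a given mass of spilled oil, assuming the laboratory
# figure of ~50 times its own weight holds at larger scales.

ABSORPTION_RATIO = 50.0  # grams of oil absorbed per gram of sponge (lab figure)

def sponge_needed_kg(oil_mass_kg: float, ratio: float = ABSORPTION_RATIO) -> float:
    """Return the mass of sponge (kg) needed to absorb oil_mass_kg of oil."""
    return oil_mass_kg / ratio

# Example: a hypothetical 10-tonne spill of engine oil
spill_kg = 10_000
print(f"Sponge required: {sponge_needed_kg(spill_kg):,.0f} kg")  # about 200 kg
```

Even under that optimistic assumption, a modest 10-tonne spill would call for a couple of hundred kilograms of sponge, which may help explain why the researchers are looking for an industrial partner to scale up production.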

Here’s what the nanocellulose sponge looks like (oil was dyed red and the sponge has absorbed it from the water),

The sponge remains afloat and can be pulled out easily. The oil phase is selectively removed from the surface of water. Image: Empa

The news release describes the substance, nanofibrillated cellulose (NFC), and its advantages,

Nanofibrillated Cellulose (NFC), the basic material for the sponges, is extracted from cellulose-containing materials like wood pulp, agricultural by-products (such as straw) or waste materials (such as recycled paper) by adding water to them and pressing the aqueous pulp through several narrow nozzles at high pressure. This produces a suspension with gel-like properties containing long and interconnected cellulose nanofibres.

When the water from the gel is replaced with air by freeze-drying, a nanocellulose sponge is formed which absorbs both water and oil. This pristine material sinks in water and is thus not useful for the envisaged purpose. The Empa researchers have succeeded in modifying the chemical properties of the nanocellulose in just one process step by admixing a reactive alkoxysilane molecule in the gel before freeze-drying. The nanocellulose sponge loses its hydrophilic properties, is no longer suffused with water and only binds with oily substances.

In the laboratory the “silylated” nanocellulose sponge absorbed test substances like engine oil, silicone oil, ethanol, acetone or chloroform within seconds. Nanofibrillated cellulose sponge, therefore, reconciles several desirable properties: it is absorbent, floats reliably on water even when fully saturated and is biodegradable.

Here’s a link to and a citation for the paper,

Ultralightweight and Flexible Silylated Nanocellulose Sponges for the Selective Removal of Oil from Water by Zheng Zhang, Gilles Sèbe, Daniel Rentsch, Tanja Zimmermann, and Philippe Tingaut. Chem. Mater., 2014, 26 (8), pp 2659–2668 DOI: 10.1021/cm5004164 Publication Date (Web): April 10, 2014

Copyright © 2014 American Chemical Society

This article is behind a paywall.

I featured ‘nanocellulose and oil spills’ research at the University of Wisconsin-Madison in a Feb. 26, 2014 post titled, Cleaning up oil* spills with cellulose nanofibril aerogels (Note: I corrected a typo in my headline hence the asterisk). I also have a Dec. 31, 2013 piece about a nanotechnology-enabled oil spill recovery technology project (Naimor) searching for funds via crowdfunding. Some major oil projects being considered in Canada and the lack of research on remediation are also mentioned in that post.

Segue Alert! As for the latest on Canada and its oil export situation, there’s a rather interesting May 2, 2014 Bloomberg.com article, ‘Canada Finds China Option No Easy Answer to Keystone Snub,’ by Edward Greenspon, Andrew Mayeda, Jeremy van Loon and Rebecca Penty describing two Canadian oil projects and offering a US perspective,

It was February 2012, three months since President Barack Obama had phoned the Canadian prime minister to say the Keystone XL pipeline designed to carry vast volumes of Canadian crude to American markets would be delayed.

Now Harper [Canadian Prime Minister Stephen Harper] found himself thousands of miles from Canada on the banks of the Pearl River promoting Plan B: a pipeline from Alberta’s landlocked oil sands to the Pacific Coast where it could be shipped in tankers to a place that would certainly have it — China. It was a country to which he had never warmed yet that served his current purposes. [China’s President at that time was Hu Jintao, 2002 – 2012; currently the President is Xi Jinping, 2013 – ]

The writers do a good job of describing a number of factors having an impact on one or both of the pipeline projects. However, no mention is made in the article that Harper is from the province of Alberta and represents that province’s Calgary Southwest riding. For those unfamiliar with Calgary, it is a city dominated by oil companies. I imagine Mr. Harper is under considerable pressure to resolve oil export and transport issues, and I would expect those companies would prefer the US issues be resolved first, since many of the oil companies in Calgary have US headquarters.

Still, it seems simple: if the US is not interested, as the problems with the Keystone XL pipeline project suggest, ship the oil to China via a pipeline through the province of British Columbia and then by tanker. What the writers do not mention is yet another complicating factor: the Trudeaus, both Justin and his late father, Pierre.

As Prime Minister of Canada, Pierre Trudeau was unloved in Alberta, Harper’s home province, due to his energy policies and the creation of the National Energy Program. Harper appears, despite his denials, to have an antipathy towards Pierre Trudeau that goes beyond the political to the personal, and it seems to extend beyond Pierre’s grave to his son, Justin. A March 21, 2014 article by Mark Kennedy for the National Post describes Harper’s response to Trudeau’s 2000 funeral this way,

Stephen Harper, then the 41-year-old president of the National Citizens Coalition (NCC), was a proud conservative who had spent three years as a Reform MP. He had entered politics in the mid-1980s, in part because of his disdain for how Pierre Trudeau’s “Just Society” had changed Canada.

So while others were celebrating Trudeau’s legacy, Harper hammered out a newspaper article eviscerating the former prime minister on everything from policy to personality.

Harper blasted Trudeau Sr. for creating “huge deficits, a mammoth national debt, high taxes, bloated bureaucracy, rising unemployment, record inflation, curtailed trade and declining competitiveness.”

On national unity, he wrote that Trudeau was a failure. “Only a bastardized version of his unity vision remains and his other policies have been rejected and repealed by even his own Liberal party.”

Trudeau had merely “embraced the fashionable causes of his time,” wrote Harper.

Getting personal, he took a jab at Trudeau over not joining the military during the Second World War: “He was also a member of the ‘greatest generation,’ the one that defeated the Nazis in war and resolutely stood down the Soviets in the decades that followed. In those battles however, the ones that truly defined his century, Mr. Trudeau took a pass.”

The article was published in the National Post Oct. 5, 2000 — two days after the funeral.

Kennedy’s article was occasioned by the campaign being led by Harper’s Conservative party against the leader (as of April 2013) of the Liberal Party, Justin Trudeau.

It’s hard to believe that Harper’s hesitation over China is solely due to human rights issues, especially since Harper has not been noted for consistent interest in those issues and, more particularly, since Prime Minister Pierre Trudeau was one of the first ‘Western’ leaders to visit communist China. Interestingly, Harper has been much more enthusiastic about the US than Pierre Trudeau, who, while addressing the Press Club in Washington, DC in March 1969, made this observation (from the Pierre Trudeau Wikiquote entry),

Living next to you [the US] is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.

On that note, I think Canada is always going to be sleeping with an elephant; the only question is, who’s the elephant now? In any event, perhaps Harper is more comfortable with the elephant he knows and that may explain why China’s offer to negotiate a free trade agreement has been left unanswered (this too was not noted in the Bloomberg article). The offer and lack of response were mentioned by Yuen Pau Woo, President and CEO of the Asia Pacific Foundation of Canada, who spoke at length about China, Canada, and their trade relations at a Jan. 31, 2014 MP breakfast (scroll down for video highlights of the Jan. 31, 2014 breakfast) held by Member of Parliament (MP) for Vancouver-Quadra, Joyce Murray.

Geopolitical tensions and Canadian sensitivities aside, I think Canadians in British Columbia (BC), at least, had best prepare for more oil being transported and the likelihood of spills. In fact, there are already more shipments according to a May 6, 2014 article by Larry Pynn for the Vancouver Sun,

B.C. municipalities work to prevent a disastrous accident as rail transport of oil skyrockets

The number of rail cars transporting crude oil and petroleum products through B.C. jumped almost 200 per cent last year, reinforcing the resolve of municipalities to prevent a disastrous accident similar to the derailment in Lac-Mégantic in Quebec last July [2013].

Transport Canada figures provided at The Vancouver Sun’s request show just under 3,400 oil and petroleum rail-car shipments in B.C. last year, compared with about 1,200 in 2012 and 50 in 2011.

The figures come a week after The Sun revealed that train derailments jumped 20 per cent to 110 incidents last year in B.C., the highest level in five years.

Between 2011 and 2012, oil and petroleum rail-car shipments in BC jumped 24-fold (from 50 to roughly 1,200), an increase of some 2,300%. The near-tripling between 2012 and 2013 (from about 1,200 to just under 3,400) seems almost paltry in comparison. Given the increase in shipments and the rise in derailments, one assumes there’s an oil spill waiting to happen, especially if the Canadian government manages to come to an agreement regarding the proposed pipeline for BC. Frankly, I have concerns about the other pipeline too, since either will require more rail cars, trucks, and/or tankers for transport to major centres, edging us all closer to a major oil spill.
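For anyone who wants to check that arithmetic, here’s a quick Python sketch using the approximate Transport Canada figures quoted above (the numbers are rounded, so the results are ballpark only),

```python
# Percentage increases in BC oil and petroleum rail-car shipments,
# based on the approximate figures quoted in the Vancouver Sun article.

def percent_increase(old: float, new: float) -> float:
    """Return the percentage increase from old to new."""
    return (new - old) / old * 100

shipments = {2011: 50, 2012: 1_200, 2013: 3_400}  # approximate rail-car shipments

print(f"2011 -> 2012: {percent_increase(shipments[2011], shipments[2012]):.0f}% increase")
# roughly 2300%, i.e. a 24-fold jump
print(f"2012 -> 2013: {percent_increase(shipments[2012], shipments[2013]):.0f}% increase")
# roughly 183%, i.e. nearly a threefold jump
```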

All of this brings me back to Empa, its oil-absorbing nanocellulose sponges, and the researchers’ plea for investors and funds to further their research. I hope they and all the other researchers (e.g., Naimor) searching for ways to develop and bring their clean-up ideas to market find some support.

*EurekAlert link added May 7, 2014.

ETA May 8, 2014:  Some types of crude oil are more flammable than others according to a May 7, 2014 article by Lindsay Abrams for Salon.com (Note: Links have been removed),

Why oil-by-rail is an explosive disaster waiting to happen
A recent spate of fiery train accidents all have one thing in common: highly volatile cargo from North Dakota

In case the near continuous reports of fiery, deadly oil train accidents hasn’t been enough to convince you, Earth Island Journal is out with a startling investigative piece on North Dakota’s oil boom and the dire need for regulations governing that oil’s transport by rail.

The article is pegged to the train that derailed and exploded last summer in  [Lac-Mégantic] Quebec, killing 47 people, although it just as well could have been the story of the train that derailed and exploded in Alabama last November, the train that derailed and exploded in North Dakota last December, the train that derailed and exploded in Virginia last week or — let’s face it — any future accidents that many see as an inevitability.

The Bakken oil fields in North Dakota are producing over a million barrels of crude oil a day, more than 60 percent of which is shipped by rail. All that greenhouse gas-emitting fossil fuel is bad enough; that more oil spilled in rail accidents last year than the past 35 years combined is also no small thing. But the particular chemical composition of Bakken oil lends an extra weight to these concerns: according to the Pipeline and Hazardous Materials Safety Administration, it may be more flammable and explosive than traditional crude.

While Abrams’ piece is not focused on oil cleanups, it does raise some interesting questions about crude oil transport and whether or not the oil from Alberta might also be more dangerous than usual.

Nanotechnology and the US mega science project: BAM (Brain Activity Map) and more

The Brain Activity Map (BAM) project received budgetary approval as of this morning, Apr. 2, 2013 (I first mentioned BAM in my Mar. 4, 2013 posting when approval seemed imminent). From the news item, Obama Announces Huge Brain-Mapping Project, written by Stephanie Pappas for Yahoo News (Note: Links have been removed),

 President Barack Obama announced a new research initiative this morning (April 2) to map the human brain, a project that will launch with $100 million in funding in 2014.

The Brain Activity Map (BAM) project, as it is called, has been in the planning stages for some time. In the June 2012 issue of the journal Neuron, six scientists outlined broad proposals for developing non-invasive sensors and methods to experiment on single cells in neural networks. This February, President Obama made a vague reference to the project in his State of the Union address, mentioning that it could “unlock the answers to Alzheimer’s.”

In March, the project’s visionaries outlined their final goals in the journal Science. They call for an extended effort, lasting several years, to develop tools for monitoring up to a million neurons at a time. The end goal is to understand how brain networks function.

“It could enable neuroscience to really get to the nitty-gritty of brain circuits, which is the piece that’s been missing from the puzzle,” Rafael Yuste, the co-director of the Kavli Institute for Brain Circuits at Columbia University, who is part of the group spearheading the project, told LiveScience in March. “The reason it’s been missing is because we haven’t had the techniques, the tools.” [Inside the Brain: A Journey Through Time]

Not all neuroscientists support the project, however, with some arguing that it lacks clear goals and may cannibalize funds for other brain research.

….

I believe the $100M mentioned for 2014 would be one installment in a series totaling up to $1B or more. In any event, it seems like a timely moment to comment on the communications campaign that has been waged on behalf of the BAM. It reminds me a little of the campaign for graphene, which was waged in the buildup to the decision as to which two projects (in a field of six semi-finalists, later narrowed to four finalists) should each receive a FET (European Union’s Future and Emerging Technology) 1 billion euro research prize. It seemed to me, even a year or so before the decision, that graphene’s win was a foregone conclusion, but the organizers left nothing to chance and were relentless in their pursuit of attention and media coverage in the buildup to the final decision.

The most recent salvo in the BAM campaign was an attempt to link it with nanotechnology. A shrewd move given that the US has spent well over $1B since the US National Nanotechnology Initiative (NNI) was first approved in 2000. Linking the two projects means the NNI can lend a little authority to the new project (subtext: we’ve supported a mega-project before and that was successful) while the new project BAM can imbue the ageing NNI with some excitement.

Here’s more about nanotechnology and BAM from a Mar. 27, 2013 Spotlight article by Michael Berger on Nanowerk,

A comprehensive understanding of the brain remains an elusive, distant frontier. To arrive at a general theory of brain function would be an historic event, comparable to inferring quantum theory from huge sets of complex spectra and inferring evolutionary theory from vast biological field work. You might have heard about the proposed Brain Activity Map – a project that, like the Human Genome Project, will tap the hive mind of experts to make headway in the understanding of the field. Engineers and nanotechnologists will be needed to help build ever smaller devices for measuring the activity of individual neurons and, later, to control how those neurons function. Computer scientists will be called upon to develop methods for storing and analyzing the vast quantities of imaging and physiological data, and for creating virtual models for studying brain function. Neuroscientists will provide critical biological expertise to guide the research and interpret the results.

Berger goes on to highlight some of the ways nanotechnology-enabled devices could contribute to the effort. He draws heavily on a study published Mar. 20, 2013 online in ACS (American Chemical Society) Nano. Shockingly, the article is open access. Given that this is the first time I’ve come across an open access article in any of the American Chemical Society’s journals, I suspect that there was payment of some kind involved to make this information freely available. (The practice of allowing researchers to pay more in order to guarantee open access to their research in journals that also have articles behind paywalls seems to be in the process of becoming more common.)

Here’s a citation and a link to the article about nanotechnology and BAM,

Nanotools for Neuroscience and Brain Activity Mapping by A. Paul Alivisatos, Anne M. Andrews, Edward S. Boyden, Miyoung Chun, George M. Church, Karl Deisseroth, John P. Donoghue, Scott E. Fraser, Jennifer Lippincott-Schwartz, Loren L. Looger, Sotiris Masmanidis, Paul L. McEuen, Arto V. Nurmikko, Hongkun Park, Darcy S. Peterka, Clay Reid, Michael L. Roukes, Axel Scherer, Mark Schnitzer, Terrence J. Sejnowski, Kenneth L. Shepard, Doris Tsao, Gina Turrigiano, Paul S. Weiss, Chris Xu, Rafael Yuste, and Xiaowei Zhuang. ACS Nano, 2013, 7 (3), pp 1850–1866 DOI: 10.1021/nn4012847 Publication Date (Web): March 20, 2013
Copyright © 2013 American Chemical Society

As these things go, it’s a readable article for people without a neuroscience education provided they don’t mind feeling a little confused from time to time. From Nanotools for Neuroscience and Brain Activity Mapping (Note: Footnotes and links removed),

The Brain Activity Mapping (BAM) Project (…) has three goals in terms of building tools for neuroscience capable of (…) measuring the activity of large sets of neurons in complex brain circuits, (…) computationally analyzing and modeling these brain circuits, and (…) testing these models by manipulating the activities of chosen sets of neurons in these brain circuits.

As described below, many different approaches can, and likely will, be taken to achieve these goals as neural circuits of increasing size and complexity are studied and probed.

The BAM project will focus both on dynamic voltage activity and on chemical neurotransmission. With an estimated 85 billion neurons, 100 trillion synapses, and 100 chemical neurotransmitters in the human brain,(…) this is a daunting task. Thus, the BAM project will start with model organisms, neural circuits (vide infra), and small subsets of specific neural circuits in humans.

Among the approaches that show promise for the required dynamic, parallel measurements are optical and electro-optical methods that can be used to sense neural cell activity such as Ca2+,(7) voltage,(…) and (already some) neurotransmitters;(…) electrophysiological approaches that sense voltages and some electrochemically active neurotransmitters;(…) next-generation photonics-based probes with multifunctional capabilities;(18) synthetic biology approaches for recording histories of function;(…) and nanoelectronic measurements of voltage and local brain chemistry.(…) We anticipate that tools developed will also be applied to glia and more broadly to nanoscale and microscale monitoring of metabolic processes.

Entirely new tools will ultimately be required both to study neurons and neural circuits with minimal perturbation and to study the human brain. These tools might include “smart”, active nanoscale devices embedded within the brain that report on neural circuit activity wirelessly and/or entirely new modalities of remote sensing of neural circuit dynamics from outside the body. Remarkable advances in nanoscience and nanotechnology thus have key roles to play in transduction, reporting, power, and communications.

One of the ultimate goals of the BAM project is that the knowledge acquired and tools developed will prove useful in the intervention and treatment of a wide variety of diseases of the brain, including depression, epilepsy, Parkinson’s, schizophrenia, and others. We note that tens of thousands of patients have already been treated with invasive (i.e., through the skull) treatments. [emphases mine] While we hope to reduce the need for such measures, greatly improved and more robust interfaces to the brain would impact effectiveness and longevity where such treatments remain necessary.

Perhaps not so coincidentally, there was this Mar. 29, 2013 news item on Nanowerk,

Some human cells forget to empty their trash bins, and when the garbage piles up, it can lead to Parkinson’s disease and other genetic and age-related disorders. Scientists don’t yet understand why this happens, and Rice University engineering researcher Laura Segatori is hoping to change that, thanks to a prestigious five-year CAREER Award from the National Science Foundation (NSF).

Segatori, Rice’s T.N. Law Assistant Professor of Chemical and Biomolecular Engineering and assistant professor of bioengineering and of biochemistry and cell biology, will use her CAREER grant to create a toolkit for probing the workings of the cellular processes that lead to accumulation of waste material and development of diseases, such as Parkinson’s and lysosomal storage disorders. Each tool in the kit will be a nanoparticle — a speck of matter about the size of a virus — with a specific shape, size and charge.  [emphases mine] By tailoring each of these properties, Segatori’s team will create a series of specialized probes that can uncover the workings of a cellular process called autophagy.

“Eventually, once we understand how to design a nanoparticle to activate autophagy, we will use it as a tool to learn more about the autophagic process itself because there are still many question marks in biology regarding how this pathway works,” Segatori said. “It’s not completely clear how it is regulated. It seems that excessive autophagy may activate cell death, but it’s not yet clear. In short, we are looking for more than therapeutic applications. We are also hoping to use these nanoparticles as tools to study the basic science of autophagy.”

There is no direct reference to BAM but there are some intriguing correspondences.

Finally, there is no mention of nanotechnology in this radio broadcast/podcast and transcript but it does provide more information about BAM (for many folks this was the first time they’d heard about the project) and the hopes and concerns this project raises while linking it to the Human Genome Project. From the Mar. 31, 2013 posting of a transcript and radio (Kera News; a National Public Radio station) podcast titled, Somewhere Over the Rainbow: The Journey to Map the Human Brain,

During the State of the Union, President Obama said the nation is about to embark on an ambitious project: to examine the human brain and create a road map to the trillions of connections that make it work.

“Every dollar we invested to map the human genome returned $140 to our economy — every dollar,” the president said. “Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s.”

Details of the project have slowly been leaking out: $3 billion, 10 years of research and hundreds of scientists. The National Institutes of Health is calling it the Brain Activity Map.

Obama isn’t the first to tout the benefits of a huge government science project. But can these projects really deliver? And what is mapping the human brain really going to get us?

Whether one wants to call it a public relations campaign or a marketing campaign is irrelevant. Science does not take place in an environment where data and projects are considered dispassionately. Enormous amounts of money are spent to sway public opinion and policymakers’ decisions.

ETA Apr. 3, 2013: Here are more stories about BAM and the announcement:

BRAIN Initiative Launched to Unlock Mysteries of Human Mind

Obama’s BRAIN Only 1/13 The Size Of Europe’s

BRAIN Initiative Builds on Efforts of Leading Neuroscientists and Nanotechnologists

Israel’s Prime Minister to offer US President Obama two nanoscale Declarations of Independence

President Barack Obama will receive his present of a nanoscale document containing the US and Israeli Declarations of Independence in Israel, according to a Mar. 19, 2013 news item by Kevin Hattori on phys.org,

In a ceremony to be held on Wednesday, March 20, [2013] in Jerusalem, Israeli Prime Minister Benjamin Netanyahu will present U.S. President Barack Obama with nano-sized inscribed replicas of the Declarations of Independence of the United States and the State of Israel. Created by scientists at the Technion’s Russell Berrie Nanotechnology Institute (RBNI), at the request of PM Netanyahu, the Declarations appear side-by-side on a gold-coated silicon chip smaller than a pinhead. The juxtaposition symbolizes the shared values of both countries.

Hattori’s Mar. 18, 2013 news release for the American Technion Society (ATS), and the origin for the phys.org news item, provides this technical detail,

The area of the etched inscriptions is 0.04 square millimeters, and 0.00002 millimeters (20 nanometers) deep. The chip is affixed to a Jerusalem Stone dating to the Second Temple Period (1st Century BCE to 1st Century CE).

“This unique application of cutting-edge technology is just one example of Israel’s remarkable leadership in high-tech,” said Technion President Peretz Lavie.

The text was written using a focused ion beam (FIB) generator that shot tiny particles called Gallium ions onto a gold surface covering a base layer of silicon.  In a process that can be likened to digging a hole in the earth using a water jet, the ion beam etched the surface of the gold layer, making the underlying silicon layer visible.

The original image was translated into etching instructions using a special program developed for this purpose by Dr. Ohad Zohar, who conducted his Ph.D. under Prof. Uri Sivan of the Technion Physics Department. The engraving was done by Dr. Tzipi Cohen-Hyams, head of the RBNI Focused Ion Beam Lab. Other members of the team were Prof. Wayne D. Kaplan, Prof. Nir Tessler, Mr. Yaacov Shneider, Dr. Orna Ternyak, and Ms. Svetlana Yoffis.  The work was conducted in the Technion’s Sara and Moshe Zisapel Nanoelectronics Center and the Wolfson Microelectronics Research and Teaching Center.
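Out of curiosity, and purely as my own back-of-the-envelope exercise (the news release only gives the area and the depth), here’s what those dimensions work out to if the etched region is assumed to be square,

```python
import math

# Figures quoted in the Technion news release
area_mm2 = 0.04   # etched area, square millimetres
depth_nm = 20.0   # etch depth, nanometres

# Side length if the etched region were a square
side_um = math.sqrt(area_mm2) * 1_000   # 0.2 mm, i.e. 200 micrometres

# Volume of gold removed (area x depth)
area_um2 = area_mm2 * 1_000_000         # 40,000 square micrometres
depth_um = depth_nm / 1_000             # 0.02 micrometres
volume_um3 = area_um2 * depth_um        # 800 cubic micrometres

print(f"Side of (assumed) square region: {side_um:.0f} µm")  # ~200 µm
print(f"Volume of material removed: {volume_um3:.0f} µm³")   # ~800 µm³
```

Two hundred micrometres is roughly the width of two or three human hairs, which squares with the claim that the chip is smaller than a pinhead.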

Here’s what the chip looks like,

Chip containing U.S. and Israeli Declarations of Independence, on Jerusalem stone (downloaded from http://www.ats.org/site/News2?page=NewsArticle&id=7807&news_iv_ctrl=1161)

There’s also this video describing how the work was done.

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first one about implanting a brain chip that allows rats to perceive infrared light was mentioned in my Feb. 15, 2013 posting. The latest project is a brain-to-brain (rats) communication project as per a Feb. 28, 2013 news release on *EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, the origin for the news release on EurekAlert, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”
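As an aside, for readers wondering how far above chance a 65 or 70 per cent success rate really is on a two-choice task, here’s a rough sketch of a binomial check against the 50 per cent chance level. The trial count is my own assumption for illustration; the published study reports the actual numbers,

```python
from math import comb

def chance_probability(n_trials: int, n_successes: int, p_chance: float = 0.5) -> float:
    """Probability of getting at least n_successes out of n_trials purely by chance."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
        for k in range(n_successes, n_trials + 1)
    )

# Hypothetical example: 70 correct choices out of 100 trials on a two-lever task
n_trials, n_successes = 100, 70
p = chance_probability(n_trials, n_successes)
print(f"P(at least {n_successes}/{n_trials} by chance) = {p:.1e}")  # about 4e-05
```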

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions, including in this Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control. By the way, Nicolelis hopes to have a paraplegic individual, using technology he is developing for the Walk Again project, deliver the opening kick at the 2014 World Cup games in Brazil.

While there’s much excitement about Nicolelis and his work, there are other ‘brain’ projects being developed in the US including the Brain Activity Map (BAM), which James Lewis notes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas, including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.

The Human Brain Project has been mentioned here previously. The  most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative) each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the human genome project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar. 6, 2013: I have found a Human Connectome Project Mar. 6, 2013 news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects and the reference to a ‘connectome’ in the BAM material is meant generically. I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.

Nano success in NY State breeds competition for credit as US election nears

It’s been a while since I’ve posted any items about nanotechnology efforts in New York state. In general, I’ve found the efforts at communication and public engagement quite impressive as they’ve been important to the overall strategy (I suspect some credit should be given to serendipity) of making New York state a center for nanotechnology research, training, and industry.

Yesterday, May 8, 2012, on the occasion of a visit from President Barack Obama there was a bit of a kerfuffle regarding who should get the credit for New York state’s leadership, Democrats or Republicans. Since this is an election year in the US, this was perhaps predictable.

From the May 8, 2012 article by Tom Precious for BuffaloNews.com,

In Albany [New York state capital] today, it’s “who is the greater visionary” time.

On Monday, an aide to Gov. Andrew Cuomo told an Albany radio station it was first the idea of the governor’s father, former Gov. Mario Cuomo, to pump state money into what has become a center with more than $13 billion of private investment and that today is undergoing a major new expansion partnering with the likes of IBM and Intel.

Hours later on Monday, Assembly Speaker Sheldon Silver, a Manhattan Democrat, noted that he has led the charge for two decades to support the university center that is helping to make Albany a high-tech center for nanoscience engineering, and now, chip manufacturing.

So today, a couple hours before Air Force One was set to leave Washington for Albany for the Obama stop, it was “take credit time” for the Republicans.

State Republican Party Chairman Ed Cox said the nanoscale center in Albany was “developed about 10 years ago by a Republican governor based on Republican principles.” In a conference call with reporters, Cox said the Albany facility was a result of the “leadership” of former Gov. George E. Pataki.

“Arguing over credit is something for small-minded people who get bogged down in the political headlines of the day. What I tried to do was put in place policies that speak for themselves,” Pataki said in the phone call this morning. He then pointed, for a second time in the call, to the timeline of its important events on the nanoscale facility’s web site that begins in 2001 — when Pataki was in office.

I think there’s enough credit to go around, although perhaps not during an election year. In any event, I think their initiative has been quite impressive.

* In headline ‘breed’ corrected to ‘breeds’ on Oct. 11, 2013.

Egyptian scientists win cash prize for innovation: a nano test for Hepatitis C

A team of Egyptian scientists won the $10,000 prize for 3rd place at Intel’s 7th Annual Global Challenge held at the University of California at Berkeley. The team,  Dr Hassan M E Azzazy, Tamer M Samir, Sherif Mohamed Shawky, Mai M H Mansour and Ahmed H Tolba, won both an Intel Global Challenge Prize and 1st place in the Arab Technology Business Plan Competition for its Hepatitis C test. From the Nov. 16, 2011 article by Georgina Enzer for ITP.net,

The team developed a Hepatitis C test which uses gold nanoparticles to detect Hepatitis C in less than an hour, and at one-tenth the cost of current commercial tests. The team won a $10,000 prize for their innovation.

The Intel Global Challenge at UC Berkeley encourages student entrepreneurs and rewards innovative ideas that have the potential to have a positive impact on society.

The Egypt team, NanoDiagX, led by Dr Hassan M E Azzazy, Tamer M Samir, Sherif Mohamed Shawky, Mai M H Mansour and Ahmed H Tolba won first place in the 7th Arab Technology Business Plan Competition 2011, organised by the Arab Science and Technology Foundation (ASTF) in partnership with Intel Corporation. The regional competition, which was also in partnership with the United Nations Industrial Development Organisation (UNIDO), features 50 projects from 50 Arab entrepreneurs across 15 countries.

U.S. President Barack Obama has recognized the team’s achievements, from the Nov. 19, 2011 news item on Egypt.com,

U.S. President Barack Obama honored the Egyptian team that won third prize of Intel’s Global Leadership after discovering a new cure for hepatitis C virus with nanotechnology.

The Egyptian team, Nano-Diagx, is the first Arab team to win the competition, organized by the Arab Organization for Science and Technology in cooperation with Intel and UNIDO.

Azazi [Dr. Hassan Azazi] said his team s most important advantage is the spirit of teamwork, which is uncommon in the culture of the Arab region.

He added the project used nanotechnology and gold to develop a cure for HIV hepatitis, which affects more than 200 million people worldwide and more than 100,000 Egyptians annually, particularly in cancer cases and cirrhosis of the liver.

It should be mentioned 28 technological projects participated in Intel’s World Challenge this year. The projects are all from 22 countries; Egypt, Saudi Arabia, Lebanon, Thailand, America, Portugal, Russia, Turkey, India, Uruguay, China, Japan, Brazil, Taiwan, Philippines, Turkey, Argentina, Chile, Poland, Denmark and Israel.

I came to the conclusion that the team was successful in two competitions: Intel’s World Challenge, which attracted 28 entries, and the Arab Technology Business Plan Competition, which attracted 50 entries, even though this is not stated explicitly in the materials I have read.

Congratulations to the Egyptian team on its accomplishments, which become even more noteworthy when you consider the working conditions for many scientists in Egypt. In a Feb. 4, 2011 posting, I excerpted parts of an interview in Nature magazine about Egypt and science,

The article goes on to recount a Q & A (Questions and Answers) session with Michael Harms of the German Academic Exchange Service offering his view from Cairo,

How would you describe Egyptian science?

There are many problems. Universities are critically under-funded and academic salaries are so low that most scientists need second jobs to be able to make a living. [emphasis mine] Tourist guides earn more money than most scientists. You just can’t expect world-class research under these circumstances. Also, Egypt has no large research facilities, such as particle accelerators. Some 750,000 students graduate each year and flood the labour market, yet few find suitable jobs – one reason for the current wave of protests.

If you are interested, here’s the article, ‘Deep fury’ of Egyptian scientists.

2011 Scientific integrity processes: the US and Canada

Given recent scientific misconduct (see the July 25, 2011 posting, July is science scandal month, at The Prodigal Academic blog) and a very slow news month this August, I thought I’d take a look at scientific integrity in the US and in Canada.

First, here’s a little history. On March 9, 2009, US President Barack Obama issued a Presidential Memorandum on Scientific Integrity (excerpted),

Science and the scientific process must inform and guide decisions of my Administration on a wide range of issues, including improvement of public health, protection of the environment, increased efficiency in the use of energy and other resources, mitigation of the threat of climate change, and protection of national security.

The public must be able to trust the science and scientific process informing public policy decisions.  Political officials should not suppress or alter scientific or technological findings and conclusions.  If scientific and technological information is developed and used by the Federal Government, it should ordinarily be made available to the public.  To the extent permitted by law, there should be transparency in the preparation, identification, and use of scientific and technological information in policymaking.  The selection of scientists and technology professionals for positions in the executive branch should be based on their scientific and technological knowledge, credentials, experience, and integrity.

On December 17, 2010, John P. Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, issued his own memorandum requesting compliance with the President’s order (from the Dec. 17, 2010 posting on The White House blog),

Today, in response to the President’s request, I am issuing a Memorandum to the Heads of Departments and Agencies that provides further guidance to Executive Branch leaders as they implement Administration policies on scientific integrity. The new memorandum describes the minimum standards expected as departments and agencies craft scientific integrity rules appropriate for their particular missions and cultures, including a clear prohibition on political interference in scientific processes and expanded assurances of transparency. It requires that department and agency heads report to me on their progress toward completing those rules within 120 days.

Here’s my edited version (I removed fluff, i.e. material along these lines: scientific integrity is of utmost importance …) of the list Holdren provided,

Foundations

  1. Ensure a culture of scientific integrity.
  2. Strengthen the actual and perceived credibility of Government research. Of particular importance are (a) ensuring that selection of candidates for scientific positions in the executive branch is based primarily on their scientific and technological knowledge, credentials, experience, and integrity, (b) ensuring that data and research used to support policy decisions undergo independent peer review by qualified experts where feasible and appropriate, and consistent with law, (c) setting clear standards governing conflicts, and (d) adopting appropriate whistleblower protections.
  3. Facilitate the free flow of scientific and technological information, consistent with privacy and classification standards. … Consistent with the Administration’s Open Government Initiative, agencies should expand and promote access to scientific and technological information by making it available  online in open formats. Where appropriate, this should include data and models underlying regulatory proposals and policy decisions.
  4. Establish principles for conveying scientific and technological information to the public. … Agencies should communicate scientific and technological findings by including a clear explication of underlying assumptions; accurate contextualization of uncertainties; and a description of the probabilities associated with optimistic and pessimistic projections, including best-case and worst-case scenarios where appropriate.

Public communication

  1. In response to media interview requests about the scientific and technological dimensions of their work, agencies will offer articulate and knowledgeable spokespersons who can, in an objective and nonpartisan fashion, describe and explain these dimensions to the media and the American people.
  2. Federal scientists may speak to the media and the public about scientific and technological matters based on their official work, with appropriate coordination with their immediate supervisor and their public affairs office. In no circumstance may public affairs officers ask or direct Federal scientists to alter scientific findings.
  3. Mechanisms are in place to resolve disputes that arise from decisions to proceed or not to proceed  with proposed interviews or other public information-related activities. …

(The sections on Federal Advisory Committees and professional development were less relevant to this posting, so I haven’t included them here.)

It seems to have taken the agencies a little longer than the 120-day deadline that John Holdren gave them, but all (or at least many) of the agencies have complied, according to an August 15, 2011 posting by David J. Hanson on the Chemical & Engineering News (C&EN) website,

OSTP director John P. Holdren issued the call for the policies on May 5 in response to a 2009 Presidential memorandum (C&EN, Jan. 10, page 28). [emphasis mine] The memorandum was a response to concerns about politicization of science during the George W. Bush Administration.

The submitted integrity plans include 14 draft policies and five final policies. The final policies are from the National Aeronautics & Space Administration, the Director of National Intelligences for the intelligence agencies, and the Departments of Commerce, Justice, and Interior.

Draft integrity policies are in hand from the Departments of Agriculture, Defense, Education, Energy, Homeland Security, Health & Human Services, Labor, and Transportation and from the National Oceanic & Atmospheric Administration, National Science Foundation, Environmental Protection Agency, Social Security Administrations, OSTP, and Veterans Administration.

The drafts still under review are from the Department of State, the Agency for International Development, and the National Institute of Standards & Technology.

The dates in this posting don’t match up with what I’ve found but it’s possible that the original deadline was moved to better accommodate the various reporting agencies. In any event, David Bruggeman at his Pasco Phronesis blog has commented on this initiative in a number of posts including this August 10, 2011 posting,

… I’m happy to see something out there at all, given the paltry public response from most of the government. Comments are open until September 6. Regrettably, the EPA [Environmental Protection Agency] policy falls into a trap that is all too common. The support of scientific integrity is all too often narrowly assumed to simply mean that agency (or agency-funded) scientists need to behave, and there will be consequences for demonstrated bad behavior.

But there is a serious problem of interference from non-scientific agency staff that would go beyond reasonable needs for crafting the public message.

David goes on to discuss a lack of clarity in this policy and in the Dept. of the Interior’s policy.

His August 11, 2011 posting notes the OSTP’s claim that 19 departments/agencies have submitted draft or final policies,

… Not only does the OSTP blog post not include draft or finalized policies submitted to their office, it fails to mention any timeframe for making them publicly available.  Even more concerning, there is no mention of those policies that have been publicly released.  That is, regrettably, consistent with past practice. While the progress report notes that OSTP will create a policy for its own activities, and that OSTP is working with the Office of Management and Budget on a policy for all of the Executive Office of the President, there’s no discussion of a government-wide policy.

In the last one of his recent series, the August 12, 2011 posting focuses on a Dept. of Commerce memo (Note: The US Dept. of Commerce includes the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology),

“This memorandum confirms that DAO 219-1 [a Commerce Department order concerning scientific communications] allows scientists to engage in oral fundamental research communications (based on their official work) with the media and the public without notification or prior approval to their supervisor or to the Office of Public Affairs. [emphasis David Bruggeman] Electronic communications with the media related to fundamental research that are the equivalent of a dialogue are considered to be oral communications; thus, prior approval is not required for a scientist to engage in online discussions or email with the media about fundamental research, subject to restrictions on protected nonpublic information as set forth in 219-1.”

I find the exercise rather interesting, especially in light of Margaret Munro’s July 27, 2011 article, Feds silence scientist over salmon study, for Postmedia,

Top bureaucrats in Ottawa have muzzled a leading fisheries scientist whose discovery could help explain why salmon stocks have been crashing off Canada’s West Coast, according to documents obtained by Postmedia News.

The documents show the Privy Council Office, which supports the Prime Minister’s Office, stopped Kristi Miller from talking about one of the most significant discoveries to come out of a federal fisheries lab in years.

Science, one of the world’s top research journals, published Miller’s findings in January. The journal considered the work so significant it notified “over 7,400” journalists worldwide about Miller’s “Suffering Salmon” study.

The documents show major media outlets were soon lining up to speak with Miller, but the Privy Council Office said no to the interviews.

In a Twitter conversation with me, David Bruggeman did note that the Science paywall also acts as a kind of muzzle.

I was originally going to end the posting with that last paragraph but I made a discovery, quite by accident. Canada’s Tri-Agency Funding Councils opened a consultation with stakeholders on Ethics and Integrity for Institutions, Applicants, and Award Holders on August 15, 2011, which will run until September 30, 2011. (This differs somewhat from the US exercise, which is solely focussed on science as practiced in various government agencies. The equivalent in Canada would be if Stephen Harper requested scientific integrity guidelines from the Ministries of Environment, Natural Resources, Health, Industry, etc.) From the NSERC Ethics and Integrity Guidelines page,

Upcoming Consultation on the Draft Tri-Agency Framework: Responsible Conduct of Research

The Canadian Institutes of Health Research (CIHR), the Social Sciences and Humanities Research Council of Canada (SSHRC), and NSERC (the tri-agencies) continue to work on improving their policy framework for research and scholarly integrity, and financial accountability. From August 15 to September 30, 2011, the three agencies are consulting with a wide range of stakeholders in the research community on the draft consultation document, Tri-Agency Framework: Responsible Conduct of Research.

I found the answers to these two questions in the FAQs particularly interesting,

  • What are some of the new elements in this draft Framework?

The draft Framework introduces new elements, including the following:

A strengthened Tri-Agency Research Integrity Policy
The draft Framework includes a strengthened Tri-Agency Research Integrity Policy that clarifies the responsibilities of the researcher.

‘Umbrella’ approach to RCR
The draft Framework provides an overview of all applicable research policies, including those related to the ethical conduct of research involving humans and financial management, as well as research integrity. It also clarifies the roles and responsibilities of researchers, institutions and Agencies in responding to all types of alleged breaches of Agency policies, for example, misuse of funds, unethical conduct of research involving human participants or plagiarism.

A definition of a policy breach
The draft Framework clarifies what constitutes a breach of an Agency policy.

Disclosure
The draft Framework requires researchers to disclose, at the time of application, whether they have ever been found to have breached any Canadian or other research policies, regardless of the source of funds that supported the research and whether or not the findings originated in Canada or abroad.

The Agencies are currently seeking advice from privacy experts on the scope of the information to be requested.

Institutional Investigations
The Agencies currently specify that institutional investigation committee membership must exclude those in conflict of interest. The draft Framework stipulates also that an investigation committee must include at least one member external to the Institution, and that an Agency may conduct its own review or compliance audit, or require the Institution to conduct an independent review/audit.

Timeliness of investigation
Currently, it is up to institutions to set timelines for investigations. The draft Framework states that inquiry and investigation reports are to be submitted to the relevant Agency within two and seven months, respectively, following receipt of the allegation by the institution.

  • Who is being consulted?

The Agencies have targeted their consultation to individual researchers, post-secondary institutions and other eligible organizations that apply for and receive Agency funding.

As far as I can tell, there is no mention of ethical issues where the government has interfered in the dissemination of scientific information; it seems there is an assumption that almost all ethical misbehaviour is on the part of the individual researcher or a problem with an institution following policy. There is one section devoted to breaches by institutions (all two paragraphs of it),

5 Breaches of Agency Policies by Institutions

In accordance with the MOU signed by the Agencies and each Institution, the Agencies require that each Institution complies with Agency policies as a condition of eligibility to apply for and administer Agency funds.

The process followed by the Agencies to address an allegation of a breach of an Agency policy by an Institution, and the recourse that the Agencies may exercise, commensurate with the severity of a confirmed breach, are outlined in the MOU.

My criticism of this is similar to the one that David Bruggeman made of the US policies in that the focus is primarily on the individual.

Innovation discussion in Canada lacks imagination

Today, Feb. 18, 2011, is the last day you have to make a submission to the federal government of Canada’s Review of Federal Support to Research and Development.

By the way, the expert panel appointed and tasked with carrying out this consultation consists of:

Mr. Thomas Jenkins – Chair
Dr. Bev Dahlby
Dr. Arvind Gupta
Ms. Monique F. Leroux
Dr. David Naylor
Mrs. Nobina Robinson

They represent a mix of industry and academic representatives; you can read more about them here. You will have to click for each biography. Unfortunately, neither the website nor the consultation paper offers a list of panel members with biographies grouped together for easy scanning.

One side note: big kudos to whoever decided this was a good idea (from the Review web page),

Important note: Submissions received by the panel will be made publicly available on this site as early as March 4, 2011. [emphases mine] The name and organizational affiliation of the individual making the submission will be posted on the site; however, contact information (i.e., email addresses, phone numbers and postal addresses) will not be posted, unless that information is embedded in the submission itself.

This initiative can be viewed in two ways: (a) necessary housecleaning of funding programmes for research and development (R&D) that are not effective and (b) an attempt to kickstart more innovation, i.e. better ties between government R&D efforts and industry to achieve more productivity, in Canada. From the consultation paper’s introduction,

WHY A REVIEW?

Innovation by business is a vital part of maintaining a high standard of living in Canada and building Canadian sources of global advantage. The Government of Canada plays an important role in fostering an economic climate that encourages business innovation, including by providing substantial funding through tax incentives and direct program support to enhance business research and development (R&D). Despite the high level of federal support, Canada continues to lag behind other countries in business R&D expenditures (see Figure 1), and this is believed to be a significant factor in contributing to the country’s weak productivity growth. Recognizing this, Budget 2010 announced a comprehensive review of federal support to R&D in order to maximize its contribution to innovation and to economic opportunities for business. (p. 1 print;  p. 3 PDF)

I’d like to offer a submission but I can’t for two reasons. (a) I really don’t know much about the ‘housecleaning’ aspects. (b) The panel’s terms of reference vis-à-vis innovation are so constrained that any comments I could offer fall far outside its purview.

Here’s what I mean by ‘constrained terms of reference’ (from the consultation paper),

The Panel has been asked to provide advice related to the following questions:

§ What federal initiatives are most effective in increasing business R&D and facilitating commercially relevant R&D partnerships?

§ Is the current mix and design of tax incentives and direct support for business R&D and business-focused R&D appropriate?

§ What, if any, gaps are evident in the current suite of programming, and what might be done to fill these gaps?

In addition, the Panel’s mandate specifies that its recommendations not result in an increase or decrease to the overall level of funding required for federal R&D initiatives. (p. 3 print; p. 5 PDF)

The ‘housecleaning’ effort is long overdue. Even good government programmes can outlive their usefulness, while ineffective and/or bad programmes don’t get jettisoned soon enough or often enough. If you want a sense of just how complicated our current R&D funding system is, just check this out from Nassif Ghoussoub’s Jan. 14, 2011 posting on his Piece of Mind blog,

Now the number of programs that the government supports, and which are under review is simply mind boggling.

First, you have the largest piece of the puzzle, the $4-billion “Scientific Research and Experimental Development tax credit program” (SR&ED), which seems to be the big elephant in the room. I hardly know anything about this program, besides the fact that it is a federal tax incentive program, administered by the Canada Revenue Agency, that encourages Canadian businesses of all sizes, and in all sectors, to conduct research and development in Canada. Former VP of the NRC and former President of Alberta Ingenuity, Peter Hackett, has lots to say about this. Also on youtube.

But you don’t need to be an expert to imagine the line-up of CEOs waiting to testify as to how important these tax incentives are to the country. “Paris vaut bien une messe” [Paris is well worth a mass] and a billion or four are surely worth testifying for.

Next, just take a look (below) at this illustrative list of more directly funded federal programs. Why “illustrative”? Because there are at least one hundred more!

Do you really think that any one of the heads/directors/presidents (the shopkeepers!) of these programs (the shops!) is going to testify that their programs are deficient and need less funding? What about those individuals that are getting serious funding from these programs (the clients!)?

Nassif’s list is 50 (!) programmes long and he suggests there are another 100 of them. Yes, housecleaning is long overdue but, as Nassif points out, the people most likely to submit comments about these programmes are their beneficiaries, who are disinclined to see their demise.

There is another problem with this ‘housecleaning’ process in that they seem to be interested in ‘tweaking’ rather than renovating or rethinking the system. Rob Annan at the Researcher Forum (Don’t leave Canada behind) blog titled his Feb. 4, 2011 post Innovation vs. Invention, as he questions what we mean by innovation (excerpt from his posting),

I wonder if we’ve got the whole thing wrong.

The fact is: universities don’t produce innovation. For that matter, neither does industrial R&D.

What university and industrial research produces is invention.

The Blackberry is not an innovation, it’s an invention. A new cancer-fighting drug is not an innovation, it’s an invention. A more durable prosthetic knee is not an innovation, it’s an invention.

Universities can – and do – produce inventions.

In fact, they produce inventions at an astonishing rate. University tech transfer offices (now usually branded as “centres for innovation and commercialization”) register more intellectual property than could ever be effectively commercialized.

But innovation is distinct from invention. Innovation is about process.

Innovation is about finding more efficient ways to do things. Innovation is about increasing productivity. Innovation is about creating new markets – sometimes through the commercialization of inventions.

Innovation is about the how not about the what.

Thought-provoking, yes? I think a much broader scope needs to be taken if we’re really going to discuss innovation in Canada. I’m talking about culture and making a cultural shift. One of the things I’ve noticed is that everyone keeps saying Canadians aren’t innovative. Fair enough. So, how does adding another government programme change that? As far as I can tell, most of the incentives that were created have simply encouraged people to game the system, which is what you might expect from people who aren’t innovative.

I think one of the questions that should have been asked is, how do you encourage the behaviour, in this case a cultural shift towards innovation, you want when your programmes haven’t elicited that behaviour?

Something else I’d suggest: let’s not confine the question(s) to the usual players as they’ll be inclined to offer more of the same. (There’s an old saying: if you’re a hammer, everything looks like a nail.)

Another aspect of making a cultural shift is modeling at least some of the behaviours. Here’s what Dexter Johnson at the Nanoclast blog (IEEE Spectrum) noticed about US President Barack Obama’s January 2011 State of the Union address in his January 28, 2011 posting,

Earlier this week in the President’s State of the Union Address, a 16-year-old girl by the name of Amy Chyao accompanied the First Lady at her seat.

No doubt Ms. Chyao’s presence was a bit of stage craft to underscore the future of America’s ingenuity and innovation because Ms. Chyao, who is still a high school junior, managed to synthesize a nanoparticle that when exposed to infrared light even when it is inside the body can be triggered like a bomb to kill cancer cells. [emphasis mine] Ms. Chyao performed her research and synthesis in the lab of Kenneth J. Balkus, Jr., a chemistry professor at the University of Texas at Dallas.

This is a remarkable achievement and even more so from someone still so young, so we would have to agree with Prof. Balkus’ assessment that “At some point in her future, she’ll be a star.”

However, Chyao was given to us as a shining example of the US potential for innovation, and, as a result, its competitiveness. So beyond stage craft, what is the assessment of innovation for the US in a time of emerging technologies such as nanotechnology? [emphasis mine]

As President Obama attempts to rally the nation with “This is our Sputnik moment”, Andrew Maynard over on his 20/20 blog tries to work out what innovation means in our current context as compared to what it meant 50 years ago at the dawn of the space race.

Notice the emphasis on innovation. Our US neighbours are as concerned as we are about this and what I find interesting is that there are glimmers of a very different approach. Yes, Chyao’s presence was stagecraft but this kind of ‘symbolic communication’ can be incredibly important. I say ‘can’ because if it’s purely stagecraft then it will be condemned as a cheap stunt, but if they are able to mobilize ‘enough’ stories, programmes, education, etc. that support the notion of US ingenuity and innovation then you can see a cultural shift occur. [Perfection won’t be achieved; there will be failures. What you need are enough stories and successes.] Meanwhile, Canadians keep being told they’re not innovative and ‘we must do something’.

This US consultation may be more stagecraft but it shows that not all consultations have to be as thoroughly constrained as the Canadian one finishing today. From Mike Masnick’s Feb. 9, 2011 posting (The White House Wants Advice On What’s Blocking American Innovation) on Techdirt,

The White House website kicked off a new feature this week, called Advise the Advisor, in which a senior staff member at the White House will post a YouTube video [there’s one in this posting on the Techdirt website] on a particular subject, asking the public to weigh in on that topic via a form. The very first such topic is one near and dear to our hearts: American Innovation. [emphasis mine] …

And here is the answer I provided:

Research on economic growth has shown time and time again the importance of basic innovation towards improving the standard of living of people around the world. Economist Paul Romer’s landmark research into innovation highlighted the key factor in economic growth is increasing the spread of ideas.

Traditionally, many people have considered the patent system to be a key driver for innovation, but, over the last few decades, research has repeatedly suggested that this is not the case. In fact, patents more frequently act as a hindrance to innovation rather than as a help to it. Recent research by James Bessen & Michael Meurer (reviewing dozens of patent studies) found that the costs of patents far outweigh the benefits.

This is a problem I see daily as the founder of a startup in Silicon Valley — often considered one of the most innovative places on earth. Patents are not seen as an incentive to innovation at all. Here, patents are simply feared. The fear is that anyone doing something innovative will be sued out of nowhere by someone with a broad patent. A single patent lawsuit can cost millions of dollars and can waste tons of resources that could have gone towards actual innovation. Firms in Silicon Valley tend to get patents solely for defensive purposes.

Getting back to Dexter, there is one other aspect of his comments that should be considered: the emphasis on ‘emerging technologies’. The circumstances in which we currently find ourselves are hugely different than they were during the Industrial Revolution, the arrival of plastics and pesticides, etc. We understand our science and technology and their impacts quite differently than we did even a generation ago and that requires a different approach to innovation than the ones we’ve used in the past. From Andrew Maynard’s Jan. 25, 2011 posting (2020 Science blog),

… if technology innovation is as important as Obama (and many others besides) believes it is, how do we develop the twenty first century understanding, tools and institutions to take full advantage of it?

One thing that is clear is that in connecting innovation to action, we will need new insights and “intelligence” on how to make this connection work in today’s world. These will need to address not only the process of technology innovation, but also how we develop and use it within an increasingly connected society, where more people have greater influence over what works – and what doesn’t – than ever before. This was the crux of a proposal coming out of the World Economic Forum Global Redesign Agenda earlier this year, which outlined the need for a new Global Center for Emerging Technologies Intelligence.

But beyond the need for new institutions, there is also the need for far more integrated approaches to building a sustainable future through technology innovation – getting away from the concept of technology innovation as something that is somebody else’s business, and making it everybody’s business. This was a central theme in the World Economic Forum report that Tim Harper of CIENTIFICA Ltd. and I published last week.

There’s a lot more to be said about the topic. Masnick did get a response of sorts to his submission about US innovation (from his Feb. 17, 2011 posting on Techdirt),

Tony was the first of a bunch of you to send over the news that President Obama’s top advisor, David Plouffe, has put up a blog post providing a preliminary overview of what he “heard” via the Ask the Advisor question, which we wrote about last week, concerning “obstacles to innovation.” The only indication that responses like mine were read was a brief mention about how some people complained about how the government, and particularly patent policy, got in the way of innovation:

Many respondents felt that too much government regulation stifled businesses and innovators and that the patent process and intellectual property laws are broken.

Unfortunately, rather than listening to why today’s patent system is a real and significant problem, it appears that Plouffe is using this to score political points for his boss …

Masnick hasn’t lost hope as he goes on to note in his posting.

For yet another perspective, I found that Europeans weighed in on the innovation topic at the American Association for the Advancement of Science (AAAS) 2011 annual meeting this morning (Feb. 18, 2011). From a Government of Canada science blog (http://blogs.science.gc.ca/) posting, Mobilizing resources for research and innovation: the EU model, by Helen Murphy,

EU Commission Director-General of the Joint Research Centre Robert-Jan Smits spoke about what all countries agree on: that research and innovation are essential to prosperity — not just now, but even more so in the future.

He said European leaders are voicing the same message as President Obama, who in his recent State of the Union address linked innovation to “winning the future” — something he called the “Sputnik moment of our generation.”

Smits talked about the challenge of getting agreement among the EU’s 27 member countries on a growth strategy. But they have agreed; they’ve agreed to pursue growth that is smart (putting research and innovation at centre stage), sustainable (using resources efficiently and responsibly) and inclusive (leaving no one behind and creating new jobs).

The goal is ambitious: the EU aims to create nearly four million new jobs in Europe and increase the EU’s GDP by 700 billion Euros by 2025.

What I’m trying to say is that innovation is a big conversation and I hope that the expert panel for Canada’s current consultation on this matter will go beyond its terms of reference to suggest that ‘housecleaning and tweaking’ should be part of a larger initiative that includes using a little imagination.

Thinking about nanotechnology, synthetic biology, body hacking, corporate responsibility, and zombies

In the wake of Craig Venter’s announcement (last week) of the creation of a synthetic organism (or most of one), Barack Obama, US President, has requested a special study (click here to see the letter to Dr. Amy Gutmann of the Presidential Commission for the Study of Bioethical Issues). From Andrew Maynard’s 2020 Science blog (May 26, 2010) posting,

It’s no surprise therefore that, hot on the heels of last week’s announcement, President Obama called for an urgent study to identify appropriate ethical boundaries and minimize possible risks associated with the breakthrough.

This was a bold and important move on the part of the White House. But its success will lie in ensuring the debate over risks in particular is based on sound science, and not sidetracked by groundless speculation.

The new “synthetic biology” epitomized by the Venter Institute’s work – in essence the ability to design new genetic code on computers and then “download” it into living organisms – heralds a new era of potentially transformative technology innovation. As if to underline this, the US House of Representatives Committee on Energy and Commerce will be hearing testimony from Craig Venter and others on the technology’s potential on May 27th – just days after last week’s announcement.

Andrew goes on to suggest that, while the ethical issues are very important, safety issues should not be shortchanged,

The ethics in particular surrounding synthetic biology are far from clear; the ability to custom-design the genetic code that resides in and defines all living organisms challenges our very notions of what is right and what is acceptable. Which is no doubt why President Obama wasted no time in charging the Presidential Commission for the Study of Bioethical Issues to look into the technology.

But in placing ethics so high up the agenda, my fear is that more immediate safety issues might end up being overlooked.

Hilary Sutcliffe in an opinion piece for ethicalcorp.com (writing to promote her organization’s [MATTER] Corporate responsibility and emerging technologies conference on June 4, 2010) suggests this,

Though currently most of the attention is focused on the scientists exploring synthetic biology in universities, this will also include the companies commercialising these technologies.

In addition, many organisations may soon have to consider if and how they use the applications developed using these new technologies in their own search for sustainability.

This is definitely an issue for the ‘Futures’ area of your CSR [corporate social responsibility] strategy, but there is a new ‘ology’ which is being used in products already on the market which may need to be moved up your priority list – ‘Nanotechnology’ or (‘nanotechnologies’ to be precise) – nano for short.

What I’m doing here is drawing together synthetic biology, nanotechnology, safety, and corporate social responsibility (CSR). What follows is an example of a company that apparently embraced CSR.

In the wake of BP’s (British Petroleum) disastrous handling of the Gulf of Mexico oil spill, the notion that corporate social responsibility, ethics, and safety issues are being considered and discussed seriously seems unlikely. Sure, there are some smaller companies that act on those values but those are the values of an owner; they are not often seen in action in a larger corporate entity and certainly not in a multinational enterprise such as BP.

Spinwatch offers an intriguing perspective on corporate social responsibility in an article by Tom Borelli,

To demonstrate “responsibility”, BP spent huge sums of money on an advertising campaign promoting the notion that fossil fuel emissions of carbon dioxide is to blame for global warming and its investment in renewable energy was proof the company was seeking a future that was “beyond petroleum”.

The message was clear: oil is bad for society and BP is leading the way in alternative energy.

The BP experience shows there are serious consequences when companies demagogue against its core business. …

… “If you drew up a list of companies that Americans are most disappointed in, BP would definitely feature,” said James Hoopes, professor of business ethics at Babson College, Massachusetts.

Ironically, BP’s experience delivered the exact opposite of CSR’s promise: the company’s reputation was ruined, the company is the target of government agency investigations and Congressional hearings and its stock price lags far behind its competitors and the S&P 500.

Unfortunately, in the aftermath of BP’s failures, many critics blamed corporate greed – not CSR – as the cause. They believed the profit motive forced the company to skimp on basic pipeline maintenance and worker safety.

This conclusion is far from the truth. If profit were its only goal, BP would define its role in society as a company that safely produces oil while providing jobs and energy for the economy.

This article was written in 2006 and presents a view that would never have occurred to me. I find Borelli’s approach puzzling as it seems weirdly naïve. He seems to be unaware that large companies can have competing interests and that, while one part of an enterprise may be pursuing genuine corporate social responsibility, another part may be pursuing goals antithetical to that purpose. Another possibility is that the company was cynically pursuing corporate social responsibility in the hope that it would mitigate any backlash in the event of a major accident.

Getting back to where this started, I think that nanotechnology, synthetic biology, and other emerging technologies require all of the approaches to ethics, safety rules, corporate social responsibility, regulatory frameworks, and more that we have and can dream up, including this from Andrew (from his May 26, 2010 posting),

Rather, scientists, policy makers and developers urgently need to consider how synthetic biology might legitimately lead to people and the environment being endangered, and how this is best avoided.

What we need is a science-based dialogue on potential emergent risks that present new challenges, the plausibility of these risks leading to adverse impacts, and the magnitude and nature of the possible harm that might result. Only then will we be able to develop a science-based foundation on which to build a safe technology.

Synthetic biology is still too young to second-guess whether artificial microbes will present new risks; whether bio-terror or bio-error will result in harmful new pathogens; or whether blinkered short-cuts will precipitate catastrophic failure. But the sheer momentum and audacity of the technology will inevitably lead to new and unusual risks emerging.

And this is precisely why the safety dialogue needs to be grounded in science now, before it becomes entrenched in speculation.

You can read more about the science behind Venter’s work in this May 22, 2010 posting by Andrew, and Gregor Wolbring provides an excellent roundup of the commentary on Venter’s latest achievement.

I agree we need the discussion but grounding the safety dialogue in science won’t serve as a prophylactic treatment for public panic. I believe that there is always an underlying anxiety about science, technology, and our place in the grand scheme of things. This anxiety is played out in various horror scenarios. I don’t think it’s an accident that interest in vampires, werewolves, and zombies is so high these days.

I had a minor epiphany (a reminder of sorts) the other night watching Zombiemania (you can read a review of this Canadian documentary here) when I heard the pioneers, aficionados, and experts comment on the political and social implications of zombie movies (full disclosure: I’m squeamish, so I had to miss parts of the documentary). This fear of losing control over nature and destroying the natural order (reversing death, as zombies and vampires do), and the worry over the consequences of augmenting ourselves (werewolves, zombies, and vampires are stronger than the ordinary humans who become their prey), is profound.

Venter’s feat with the bacterium may or may not set off a public panic but there is no question in my mind that at least one will occur as synthetic biology, biotechnology, and nanotechnology take us closer to real-life synthetic and transgenic organisms, androids and robots (artificial humans), and cyborgs (body hackers who integrate machines into their bodies).

Let’s proceed with the discussions about safety, ethics, etc. on the assumption that there will be a public panic. Let’s make another assumption: the public panic will be set off by something unexpected. And a final assumption: a public panic may be just what we need. That final comment has been occasioned by Schumpeter’s notion of ‘creative destruction’ (Wikipedia essay here). While the notion is grounded in economics, it has a remarkably useful application as a means of understanding social behaviour.