Tag Archives: risk management principles for nanotechnology

Incremental regulation and nanotechnology

I think today will be the end of this series. So, for the last time, the article is ‘Risk Management Principles for Nanotechnology’ by Gary E. Marchant, Douglas J. Sylvester, and Kenneth W. Abbott in Nanoethics, 2008, vol. 2, pp. 43-60.

The authors contend that the regulatory model proposed by Ayres and Braithwaite (discussed in yesterday’s post) is not sufficiently flexible to accommodate nanotechnology, as their model assumes

“a fully developed regulatory system that can effectively manage a particular set of risks. … advanced nations with highly developed legal systems in which legislatures and agencies can create, communicate, and utilize a range of regulatory options. … high levels of information and understanding on the part of regulators.” (p. 52)

In turn, the authors are proposing a refinement of the Ayres/Braithwaite model, ‘Incremental Regulation’, which they illustrate with an example from the US Environmental Protection Agency (EPA):

The EPA Nanomaterials Stewardship Program reflects precisely the approach we espouse here: begin with information gathering and assessment, encourage experiments with self-regulation and multi-stakeholder norms, move gradually to greater governmental involvement to standardize, scale up and supervise voluntary programs, perform all the steps with high levels of transparency and participation, and over time build up a regulatory end state that retains the best of these voluntary mechanisms … along with formal regulation …, as required. (p. 57)

Seems more like a plea to ‘go slow’ rather than rush to regulate before you understand the implications. The approach seems reasonable enough. Of course, implementation is always the stumbling block. I’ve worked in enough jobs where I’ve had to invoke policy in situations the policy makers never envisioned, because [1] they had no practical experience and [2] it’s impossible to create policies that cover every single contingency. That’s a big problem with nanotechnology: none of us has much practical experience with it, and I think the question that hasn’t been addressed is whether or not we are willing to take chances. Then we need to figure out what kind of chances, for how long, and who will be taking them. More soon.

Inspiration for a new approach to risk regulation for nanotechnology

I’m getting into the home stretch now regarding the ‘Risk Management Principles for Nanotechnology’ article. After dealing with the ‘classic’ risk principles and the newer precautionary principles, the authors (Marchant, Sylvester, and Abbott) unveil a theory for their proposed ‘new principles’. The theory is based on work by I. Ayres and J. Braithwaite on something they call ‘Responsive Regulation’. Briefly, they suggest avoiding the regulation/deregulation debate in favour of a flexible regulatory approach where a range of strategies is employed.

With this tool kit [range of strategies] in hand, regulators can play a tit-for-tat strategy: they allow firms to self-regulate so long as the firms reciprocate with responsible action; if instead some firms act opportunistically, regulators respond to the defectors with appropriate penalties and more stringent regulation. (p. 52, Nanoethics, 2008, vol. 2, pp. 43-60)
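
As an aside, the ‘tit-for-tat’ idea comes from game theory, and the dynamic is simple enough to sketch in a few lines of code. The sketch below is my own toy model, not anything from the article; the firm behaviours and the regulator’s responses are invented purely for illustration:

```python
# Toy model of 'responsive regulation' as tit-for-tat.
# A regulator starts by trusting each firm to self-regulate;
# a firm that defects faces stricter oversight until it cooperates again.

def regulate(firm_actions):
    """firm_actions: list of 'comply' or 'defect' observed each period.
    Returns the regulator's stance in each period."""
    responses = []
    stance = "self-regulation"  # start cooperatively
    for action in firm_actions:
        responses.append(stance)
        # Mirror the firm's latest move for the next period, tit-for-tat style.
        stance = "self-regulation" if action == "comply" else "penalty + stricter rules"
    return responses

# A firm that cooperates, defects twice, then cooperates again:
history = ["comply", "defect", "defect", "comply"]
for action, stance in zip(history, regulate(history)):
    print(f"regulator: {stance:24} firm: {action}")
```

Note the one-period lag: the regulator’s response always follows the firm’s behaviour, which is exactly what lets firms earn back self-regulation by behaving responsibly.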

There are some difficulties associated with this approach, but those are being saved for my next posting in this series.

The Project on Emerging Nanotechnologies has two events coming up. ‘Synthetic Biology: Is Ethics a Showstopper?’ on Thursday, January 8, 2009 from 12:30 pm – 1:30 pm (EST). For information on location (you have to RSVP) or how to attend via webcast (no RSVP required), check here. The other event is called, ‘Nanotech and Your Daily Vitamins; Barriers to Effective FDA Regulation of Nanotechnology-Based Dietary Supplements’ and will be held on Thursday, January 15 (?) from 9:30 am – 10:30 am (EST). The date listed on their website and in their invitation is January 14, which is incorrect. I imagine they’ll correct either the day or the date soon. For more details about the event itself, the physical location (if you’re planning to go, please RSVP), or the webcast directions (RSVP not required), please check here.

The availability heuristic and the perception of risk

It’s taking a lot longer to go through the Risk Management Principles for Nanotechnology article than I expected. But, let’s move onwards. “Availability” is the other main heuristic used when trying to understand how people perceive risk. This one is about how we assess the likelihood of one or more risks.

According to researchers, individuals who can easily recall a memory specific to a given harm are predisposed to overestimating the probability of its recurrence, compared to other more likely harms to which no memory is attached. (p. 49, Nanoethics, 2008, vol. 2)

This memory extends beyond your personal experience (although that remains the most powerful) all the way to reading or hearing about an incident. The effect can also be exacerbated by imagery and social reinforcement. Probably the most powerful recent example would be ‘frankenfoods’. We read about the cloning of Dolly the sheep, who died soon after her ‘birth’; there was the ‘stem cell’ debate; and there was ‘mad cow disease’; all of which somehow got mixed together in a debate on genetically modified food that evolved into a discussion about biotechnology in general. The whole thing was summed up as ‘frankenfood’, a term which fused a very popular icon of science gone mad, Frankenstein, with the food we put in our mouths. (Note: It is a little more complicated than that but I’m not in the mood to write a long paper or dissertation where every nuance and development is discussed.) Propelled by the media, it became one of the activists’ most successful campaigns.

Getting back to ‘availability’, it is a very powerful heuristic to use when trying to understand how people perceive risk.
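
For what it’s worth, here’s one crude way to picture the bias in code. This is a toy model of my own devising (the hazards, frequencies, and the salience weighting are all invented), not anything from the article or the underlying research:

```python
# Toy illustration of the availability heuristic: perceived risk rises
# with how easily an incident comes to mind, not with how often it occurs.

hazards = {
    # name: (true annual frequency, recall salience on a 0..1 scale)
    "plane crash":  (0.0001, 0.9),  # rare but vividly covered
    "car accident": (0.0100, 0.3),  # common but unmemorable
}

for name, (frequency, salience) in hazards.items():
    # Perceived probability: the true frequency inflated by memorability.
    perceived = frequency * (1 + 10 * salience)
    inflation = perceived / frequency
    print(f"{name:13} actual: {frequency:.4f}  perceived: {perceived:.4f}  (x{inflation:.0f})")
```

The point of the toy model is the inflation factor: the rare, vivid hazard gets overestimated far more (10x here) than the common, unmemorable one (4x), which matches the quote above.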

The thing with ‘frankenfoods’ is that it wasn’t planned. Susan Tyler Hitchcock in her book, ‘Frankenstein: A Cultural History’ (2007), traces the birth of the term in a 1992 letter written by Paul Lewis to the New York Times through to its use as a clarion cry for activists, the media, and a newly worried public. Lewis coined the phrase and one infers from the book that it was done casually. The phrase was picked up by other media outlets and other activists (Lewis is both a professor and an activist). For the full story, check out Hitchcock’s book, pp. 288-294.

I have heard the ETC Group credited with starting the ‘frankenfoods’ debate and pushing the activist agenda. While they may have been active in the debate, I have not been able to find any documentation to support the contention that the ETC Group made it happen. (Please let me know if you have found something.)

The authors (Marchant, Sylvester, and Abbott) of this risk management paper feel that nanotechnology is vulnerable to the same sort of cascading effects that the ‘availability’ heuristic provides a framework for understanding. Coming next, a ‘new’ risk management model.

The precautionary principle and a bit about the ‘culture wars’

I was sick for a while there but now I’m back. The article I’ve been talking about is ‘Risk Management Principles for Nanotechnology’ by Gary E. Marchant, Douglas J. Sylvester, and Kenneth W. Abbott. The precautionary principle, according to the article, “is often summarized by the phrase ‘better safe than sorry’.” In other words, if there’s a possibility that something bad will happen, don’t do it. As you might expect, this seems like a problematic principle to implement. Do you sit around imagining disaster scenarios or do you tell yourself everything will be fine? How do you determine the level of possible risk?
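
One way to see why it’s problematic: the precautionary rule behaves very differently from an ordinary expected-value calculation. Here’s a toy comparison (the probabilities and payoffs are numbers I made up for illustration):

```python
# Toy contrast between an expected-value rule and a worst-case
# ('better safe than sorry') rule for deploying a new technology.

# Possible outcomes: (probability, payoff) -- usually beneficial, rarely disastrous.
outcomes = [(0.95, 100), (0.05, -500)]

expected_value = sum(p * payoff for p, payoff in outcomes)
worst_case = min(payoff for _, payoff in outcomes)

print(f"expected value: {expected_value:+.1f}")  # +70.0 -> proceed
print(f"worst case:     {worst_case:+.1f}")      # -500  -> precaution says stop
```

The expected-value thinker proceeds; the precautionary thinker, looking only at the worst case, stops. Both are looking at the same numbers, which is why the principle is so hard to pin down in practice.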

One of the reasons I was so interested in the event that the Project on Emerging Nanotechnologies had organized with L’Oreal (cosmetics firm) was that the company representative would be discussing how they were implementing the precautionary principle when developing and selling their nanotechnology-based cosmetics. Unfortunately, that event has yet to be rescheduled.

The subject of risk is quite topical right now due to an article from the folks at Yale Law School’s Cultural Cognition Project (in cooperation with the Project on Emerging Nanotechnologies) that’s just been published in Nature Nanotechnology and which apparently predicts ‘culture wars’. (I read an earlier version of the work online and cited it in a presentation for the 2008 Cascadia Nanotechnology Symposium.) The major thrust of the work at Yale is that people consider the benefits and risks of an emerging technology (in this case, nanotechnology) according to their cultural values. The researchers used anthropologist Mary Douglas’s two cross-cutting dimensions of culture to explain what they mean by culture: on one axis you have hierarchy/egalitarianism and on the other you have individualism/communitarianism. One of the findings in the paper is that it doesn’t matter how much information you receive (this relates to the notion of science literacy, where if you educate people about a technology they will come to accept it and its attendant risks), since your opinion of the technology is more strongly influenced by your cultural values as measured on those two axes. I think at least some of this work is a response to the city of Berkeley’s law regulating nanotechnology research. The legislation was passed unusually quickly and, I believe, it was the first such legislation in the US.

Concurrently published in Nature Nanotechnology with the ‘culture wars’ article is an article by Dietram Scheufele where he discusses how ‘religion’ or ‘values’ have an impact on attitudes towards nanotechnology. I think this article is based on some of the material he presented last year at the 2007 American Association for the Advancement of Science annual meeting.

Why assess nano risks?

It seems like there’s a pretty obvious answer…because it could be dangerous…but risk is usually discussed along with regulatory oversight and policy, which has implications for research funding, consumer acceptance, and more. So back to the article on risk management, where the authors cite three traditional risk management principles:

(a) acceptable risk, (b) cost-benefit analysis, and (c) feasibility (or best available technology).

In acceptable risk, you figure out what the risks are and then work to minimize them until you have a technology with an acceptable level of risk. In a cost-benefit analysis, you determine whether the benefits and the costs are equal, there are more benefits than costs, or there are more costs than benefits; this is, generally speaking, bottom-line driven. The third principle, feasibility (or best available technology), skips over any analysis of risk (unlike the other two principles). According to the article,

This approach, which requires reduction of risks to the lowest level technologically or economically feasible, has the advantage of not requiring information about risks or benefits.

This one has me a little confused, as it suggests that the risk has already been assessed somehow but is no longer brought into the final equation. In other words, if we decide that steam is superior to electricity as an agent for power, we haven’t discussed the risks per se, but they are implied when determining the most feasible technology for the job.
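
Incidentally, cost-benefit analysis is the easiest of the three principles to reduce to arithmetic. Here’s a toy version (the product and all of the figures are invented for illustration):

```python
# Toy cost-benefit analysis for a hypothetical nanomaterial coating.
# All categories and dollar figures are made up.

benefits = {"longer product life": 12.0, "reduced maintenance": 5.0}  # $M/yr
costs = {"production": 6.0, "health-effects monitoring": 4.0}         # $M/yr

net_benefit = sum(benefits.values()) - sum(costs.values())
ratio = sum(benefits.values()) / sum(costs.values())

print(f"net benefit: ${net_benefit:.1f}M/yr  benefit-cost ratio: {ratio:.2f}")
# A ratio above 1 (here 1.70) is the usual 'bottom-line' go signal.
```

Of course, the hard part in real life isn’t the arithmetic, it’s putting credible numbers on the benefits and (especially) the risks in the first place.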

There are more principles to come tomorrow including the precautionary principle.