Tag Archives: Andrew Maynard

Medusa, jellyfish, and tissue engineering

The ‘Medusoid’ is a tissue-engineered jellyfish, reverse-engineered from the real animal by a collaborative team of researchers based at the California Institute of Technology (Caltech) and Harvard University. From the July 22, 2012 news item on ScienceDaily,

When one observes a colorful jellyfish pulsating through the ocean, Greek mythology probably doesn’t immediately come to mind. But the animal once was known as the medusa, after the snake-haired mythological creature its tentacles resemble. The mythological Medusa’s gaze turned people into stone, and now, thanks to recent advances in bio-inspired engineering, a team led by researchers at the California Institute of Technology (Caltech) and Harvard University has flipped that fable on its head: turning a solid material—silicone—and muscle cells into a freely swimming “jellyfish.”

“A big goal of our study was to advance tissue engineering,” says Janna Nawroth, a doctoral student in biology at Caltech and lead author of the study. “In many ways, it is still a very qualitative art [emphasis mine], with people trying to copy a tissue or organ just based on what they think is important or what they see as the major components—without necessarily understanding if those components are relevant to the desired function or without analyzing first how different materials could be used.” Because a particular function—swimming, say—doesn’t necessarily emerge just from copying every single element of a swimming organism into a design, “our idea,” she says, “was that we would make jellyfish functions—swimming and creating feeding currents—as our target and then build a structure based on that information.”

Oops! I’m not sure why Nawroth uses the word ‘qualitative’ here; it seems inappropriate given my understanding of the word. Here’s my rough definition. If anyone has anything better or can explain why Nawroth used ‘qualitative’ in that context, please do comment. I’m going to start by contrasting qualitative with quantitative, both of which I’m going to hugely oversimplify. Quantitative data offers numbers, e.g., 50,000 people committed suicide last year. Qualitative data helps offer insight into why. Researchers can obtain quantitative data from police records, vital statistics, surveys, etc., whereas qualitative data is gathered from ‘story-oriented’ or highly detailed personal interviews. (I would have used ‘hit or miss’ or ‘guesswork,’ or simply used the word ‘art’ without qualifying it in this context.)

The originating July 22, 2012 news release from Caltech goes on to describe why jellyfish were selected and how the collaboration between Harvard and Caltech came about,

Jellyfish are believed to be the oldest multi-organ animals in the world, possibly existing on Earth for the past 500 million years. Because they use a muscle to pump their way through the water, their function—on a very basic level—is similar to that of a human heart, which makes the animal a good biological system to analyze for use in tissue engineering.

“It occurred to me in 2007 that we might have failed to understand the fundamental laws of muscular pumps,” says Kevin Kit Parker, Tarr Family Professor of Bioengineering and Applied Physics at Harvard and a coauthor of the study. “I started looking at marine organisms that pump to survive. Then I saw a jellyfish at the New England Aquarium, and I immediately noted both similarities and differences between how the jellyfish pumps and the human heart. The similarities help reveal what you need to do to design a bio-inspired pump.”

Parker contacted John Dabiri, professor of aeronautics and bioengineering at Caltech—and Nawroth’s advisor—and a partnership was born. Together, the two groups worked for years to understand the key factors that contribute to jellyfish propulsion, including the arrangement of their muscles, how their bodies contract and recoil, and how fluid-dynamic effects help or hinder their movements. Once these functions were well understood, the researchers began to design the artificial jellyfish.

Here’s how they created the ‘Medusoid’ (artificial jellyfish), from the July 22, 2012 Harvard University news release on EurekAlert,

To reverse engineer a medusa jellyfish, the investigators used analysis tools borrowed from the fields of law enforcement biometrics and crystallography to make maps of the alignment of subcellular protein networks within all of the muscle cells within the animal. They then conducted studies to understand the electrophysiological triggering of jellyfish propulsion and the biomechanics of the propulsive stroke itself.

Based on such understanding, it turned out that a sheet of cultured rat heart muscle tissue that would contract when electrically stimulated in a liquid environment was the perfect raw material to create an ersatz jellyfish. The team then incorporated a silicone polymer that fashions the body of the artificial creature into a thin membrane that resembles a small jellyfish, with eight arm-like appendages.

Using the same analysis tools, the investigators were able to quantitatively match the subcellular, cellular, and supracellular architecture of the jellyfish musculature with the rat heart muscle cells.
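Neither release spells out the metric behind that quantitative matching. A common measure in this kind of fiber-alignment image analysis is a two-dimensional orientational order parameter, which scores a set of measured fiber angles from roughly 0 (random) to 1 (perfectly aligned). Here’s a minimal Python sketch of the general idea; it is an illustration only, not the team’s actual pipeline, and all the numbers in it are invented.

import numpy as np

def orientational_order_parameter(angles_deg):
    # Fibers are "headless" (0 and 180 degrees point the same way), so
    # double the angles before averaging -- the standard circular-statistics
    # trick for measuring nematic (line-like) order.
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    z = np.mean(np.exp(2j * theta))
    order = np.abs(z)                       # 1.0 = perfectly aligned, ~0.0 = random
    director = np.rad2deg(np.angle(z)) / 2  # dominant fiber direction
    return order, director

# Toy comparison: a tightly aligned "engineered tissue" vs. random fibers.
rng = np.random.default_rng(0)
aligned = rng.normal(loc=30.0, scale=8.0, size=500)   # degrees
random_fibers = rng.uniform(0.0, 180.0, size=500)

for label, angles in (("aligned", aligned), ("random", random_fibers)):
    order, director = orientational_order_parameter(angles)
    print(f"{label}: order = {order:.2f}, director = {director:.1f} degrees")

The aligned set scores near 1 with a director near 30 degrees; the random set scores near 0.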

The artificial construct was placed in a container of ocean-like salt water and shocked into swimming with synchronized muscle contractions that mimic those of real jellyfish. (In fact, the muscle cells started to contract a bit on their own even before the electrical current was applied.)

“I was surprised that with relatively few components—a silicone base and cells that we arranged—we were able to reproduce some pretty complex swimming and feeding behaviors that you see in biological jellyfish,” says Dabiri.

Their design strategy, they say, will be broadly applicable to the reverse engineering of muscular organs in humans.

As for future research directions, I’ve excerpted this from the Caltech news release,

The team’s next goal is to design a completely self-contained system that is able to sense and actuate on its own using internal signals, as human hearts do. Nawroth and Dabiri would also like for the Medusoid to be able to go out and gather food on its own. Then, researchers could think about systems that could live in the human body for years at a time without having to worry about batteries because the system would be able to fend for itself. For example, these systems could be the basis for a pacemaker made with biological elements.

“We’re reimagining how much we can do in terms of synthetic biology,” says Dabiri. “A lot of work these days is done to engineer molecules, but there is much less effort to engineer organisms. I think this is a good glimpse into the future of re-engineering entire organisms for the purposes of advancing biomedical technology. We may also be able to engineer applications where these biological systems give us the opportunity to do things more efficiently, with less energy usage.”

I think this excerpt from the Harvard news release provides some insight into at least some of the motivations behind this work,

In addition to advancing the field of tissue engineering, Parker adds that he took on the challenge of building a creature to challenge the traditional view of synthetic biology which is “focused on genetic manipulations of cells.” Instead of building just a cell, he sought to “build a beast.”

A little competitive, eh?

For anyone who’s interested in reading the research (which is behind a paywall), from the ScienceDaily news item,

Janna C Nawroth, Hyungsuk Lee, Adam W Feinberg, Crystal M Ripplinger, Megan L McCain, Anna Grosberg, John O Dabiri & Kevin Kit Parker. A tissue-engineered jellyfish with biomimetic propulsion. Nature Biotechnology, 22 July 2012 DOI: 10.1038/nbt.2269

Andrew Maynard weighs in on the matter with his July 22, 2012 posting titled, We took a rat apart and rebuilt it as a jellyfish, on the 2020Science blog (Note: I have removed links),

 Sometimes you read a science article and it sends a tingle down your spine. That was my reaction this afternoon reading Ed Yong’s piece on a paper just published in Nature Biotechnology by Janna Nawroth, Kevin Kit Parker and colleagues.

The gist of the work is that Parker’s team have created a hybrid biological machine that “swims” like a jellyfish by growing rat heart muscle cells on a patterned sheet of polydimethylsiloxane.  The researchers are using the technique to explore muscular pumps, but the result opens the door to new technologies built around biological-non biological hybrids.

Ed Yong’s July 22, 2012 article for Nature (as mentioned by Andrew) offers a wider perspective on the work than is immediately evident in either of the news releases (Note: I have removed a footnote),

Bioengineers have made an artificial jellyfish using silicone and muscle cells from a rat’s heart. The synthetic creature, dubbed a medusoid, looks like a flower with eight petals. When placed in an electric field, it pulses and swims exactly like its living counterpart.

“Morphologically, we’ve built a jellyfish. Functionally, we’ve built a jellyfish. Genetically, this thing is a rat,” says Kit Parker, a biophysicist at Harvard University in Cambridge, Massachusetts, who led the work. The project is described today in Nature Biotechnology.

….

“I think that this is terrific,” says Joseph Vacanti, a tissue engineer at Massachusetts General Hospital in Boston. “It is a powerful demonstration of engineering chimaeric systems of living and non-living components.”

Here’s a video from the researchers demonstrating the artificial jellyfish in action,

There’s a lot of material for contemplation, but what I’m going to note here is the difference in the messaging. The news releases from the ‘universities’ are very focused on the medical application, whereas the discussion in the science community revolves primarily around the synthetic biology/bioengineering elements. It seems to me that this strategy can lead to future problems with a population that is largely unprepared to deal with the notion of mixing and recombining genetic material or demonstrations of “engineering chimaeric systems of living and non-living components.”

Science communication at the US National Academy of Sciences

I guess it’s going to be a science communication kind of day on this blog. Dr. Andrew Maynard on his 2020 Science blog posted a May 22, 2012 piece about a recent two-day science communication event at the US National Academy of Sciences in Washington, DC.

The event, titled The Science of Science Communication, was held May 21-22, 2012. I was a little concerned about the content since the program suggested a dedication to metrics (which are useful but, I find, often misused) and the possibility of a predetermined result for science communication. After watching a webcast of the first session (Introduction and Overviews, offered by Baruch Fischhoff [Carnegie Mellon University] and Dietram Scheufele [University of Wisconsin-Madison], 55:35 mins.), I’m relieved to say that the first two presenters mostly avoided those pitfalls.

You can go here to watch any of the sessions held during those two days, although I will warn you that these are not TED talks. The shortest run roughly 27 mins., most run over an hour, and a couple run over two hours.

Getting back to Andrew and his take on the proceedings, excerpted from his May 22, 2012 posting,

It’s important that the National Academies of Science are taking the study of science communication (and its practice) seriously.  Inviting a bunch of social scientists into the National Academies – and into a high profile colloquium like this – was a big deal.  And irrespective of the meeting’s content, it flags a commitment to work closely with researchers studying science communication and decision analysis to better ensure informed and effective communication strategies and practice.  Given the substantial interest in the colloquium – on the web as well as at the meeting itself – I hope that the National Academies build on this and continue to engage fully in this area.

Moving forward, there needs to be more engagement between science communication researchers and practitioners.  Practitioners of science communication – and the practical insight they bring – were notable by their absence (in the main) from the colloquium program.  If the conversation around empirical research is to connect with effective practice, there must be better integration of these two communities.

It’s interesting to read about the colloquia (the science communication event was one of a series of events known as the Arthur M. Sackler Colloquia) from the perspective of someone who was present in real time.

Where do all those particles go or what does degradable mean at the nanoscale?

Scientists at Switzerland’s ETH Zurich (Swiss Federal Institute of Technology Zurich) note that cerium oxide nanoparticles do not degrade. From the May 21, 2012 article by Simone Ulmer on the ETH Zurich website,

Tiny particles of cerium oxide do not burn or change in the heat of a waste incineration plant. They remain intact on combustion residues or in the incineration system, as a new study by researchers from ETH Zurich reveals.

Over 100 million tons of waste are incinerated worldwide every year. Due to the increasing use of nanoparticles in construction materials, paints, textiles and cosmetics, for instance, nanoparticles also find their way into incineration plants. What happens to them there, however, had not been investigated until now. Three ETH-Zurich teams from the fields of chemistry and environmental engineering thus set about finding out what happens to synthetic nano-cerium oxide during the incineration of refuse in a waste incineration plant. Cerium oxide itself is a non-toxic ceramic material, not biologically degradable, and a common basic component in automobile catalytic converters and diesel soot filters.

Here’s their reasoning (from Ulmer’s article),

Experts fear that non-degradable nanomaterials might be just as harmful for humans and the environment as asbestos. As yet, however, not enough is known about the properties of nanomaterials (see ETH Life, 25 March 2010). One thing is for sure: they differ greatly from larger particles of the same material. Nanoparticles are more mobile and have a different surface structure. Knowledge of these properties is important with the increasing use of nanomaterials, as they are transferred through incineration plants or sewage, and as they are absorbed by people in food (see ETH Life, 15 July 2008) and perhaps even through the skin and respiration, and can thus enter the body. [emphases mine]

Recent research suggests that there are many, many naturally occurring nanoparticles which we and other living beings have been innocently ingesting for millennia, as noted in my Feb. 9, 2012 posting and my Nov. 24, 2011 posting. More recently, Dr. Andrew Maynard at his 2020 Science blog posted about carbon nanoparticles, which are ubiquitous. From Andrew’s May 19, 2012 posting,

This latest paper was published in the journal Science Progress a few weeks ago, and analyzes the carbon nanoparticle content of such everyday foods as bread, caramelized sugar, corn flakes and biscuits.  The authors found that products containing caramelized sugar – including baked goods such as bread – contained spherical carbon nanoparticles in the range 4 – 30 nm (with size being associated with the temperature of caramelization).

Getting back to the cerium oxide project, here’s what the Swiss scientists found (from Ulmer’s article),

The researchers’ tests revealed that cerium oxide does not change significantly during incineration. The fly-ash separation devices proved extremely efficient: the scientists did not find any leaked cerium oxide nanoparticles in the waste incineration plant’s clean gas. That said, the nanoparticles remained loosely bound to the combustion residues in the plant and partially in the incineration system, too. The fly ash separated from the flue gas also contained cerium oxide nanoparticles.

Nowadays, combustion residues – and thus the nanoparticles bound to them – end up on landfills or are reprocessed to extract copper or aluminium, for instance. The researchers see a need for action here. “We have to make sure that new nanoparticles don’t get into the water and food cycle via landfills or released into the atmosphere through further processing measures,” says Wendelin Stark, head of the study and a professor of chemical engineering at ETH Zurich. Moreover, the fact that nanoparticles (which could be inhaled if inadequate protection is worn) might be present in the incineration system needs to be taken into consideration during maintenance work.

I have a couple of questions for the researchers. First, is nanoscale cerium oxide dangerous, and do you have any studies? Second, does anything ever degrade? As I recall (dimly), matter cannot be destroyed. Are they trying to break down the nanoscale cerium oxide to a smaller scale? And, what would the impact be then?

All in all, this is very interesting research to me as it has raised some questions in a way I had not previously considered. Thanks to Nanowerk where I found the May 24, 2012 news item that alerted me to the article.

Caution and nanoscale zinc oxide in sunscreens

While I’ve had my reservations about the anti-nanosunscreen campaigning, it is important to remember that safety research into the use of nanoparticles in sunscreens is ongoing. A new piece of research on nanoscale zinc oxide and sunscreens has been performed at the Missouri University of Science and Technology, and it is something I would put under the category of interesting, possibly disturbing, and not at all definitive.

From the May 8, 2012 news item on Nanowerk,

… researchers at Missouri University of Science and Technology are discovering that sunscreen may not be so safe after all. Cell toxicity studies by Dr. Yinfa Ma, Curators’ Teaching Professor of chemistry at Missouri S&T, and his graduate student Qingbo Yang, suggest that when exposed to sunlight, zinc oxide, a common ingredient in sunscreens, undergoes a chemical reaction that may release unstable molecules known as free radicals. Free radicals seek to bond with other molecules, but in the process, they can damage cells or the DNA contained within those cells. This in turn could increase the risk of skin cancer.

“Zinc oxide may generate free radicals when exposed to UV (ultraviolet) sunlight,” May [sic] says, “and those free radicals can kill cells.”

Ma studied how human lung cells immersed in a solution containing nano-particles of zinc oxide react when exposed to different types of light over numerous time frames. Using a control group of cells that were not immersed in the zinc oxide solution, Ma compared the results of light exposure on the various groups of cells. He found that zinc oxide-exposed cells deteriorated more rapidly than those not immersed in the chemical compound. Even when exposed to visible light only, the lung cells suspended in zinc oxide deteriorated. But for cells exposed to ultraviolet rays, Ma found that “cell viability decreases dramatically.”
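The news item doesn’t give raw numbers, but the comparison it describes is straightforward bookkeeping: each treated group’s viability is normalized against the unexposed control. A minimal sketch of that calculation, with readings invented purely for illustration (they are not the study’s data):

# Relative cell viability, normalized to an untreated control group.
# All readings below are made-up illustrative values (e.g., mean assay
# absorbance); the study's actual measurements are not public in the news item.
control_reading = 1.00

readings = {
    "ZnO, dark":          0.82,
    "ZnO, visible light": 0.61,
    "ZnO, UV light":      0.18,
}

for condition, value in readings.items():
    viability = 100.0 * value / control_reading
    print(f"{condition}: {viability:.1f}% of control viability")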

I categorized this research as mildly disturbing for a couple of reasons. (a) It’s never good to hear about lung cells deteriorating. (b) I never slather sunscreen on my lungs. (c) Why didn’t the researcher test skin cells? (d) The cells were immersed in a solution; what concentration of zinc oxide nanoparticles was present in the solution, and is that the same concentration found in my sunscreen?

As the researcher notes, this work is just part of a longer scientific inquiry (from the May 8, 2012 news item),

Ma’s research on zinc oxide’s effect on cells is still in the early stages, so he cautions people from drawing conclusions about the safety or dangers of sunscreen based on this preliminary research.

“More extensive study is still needed,” May [sic] says. “This is just the first step.”

For instance, Ma plans to conduct electron spin resonance tests to see whether zinc oxide truly does generate free radicals, as he suspects. In addition, clinical trials will be needed before any conclusive evidence may be drawn from his studies.

In the meantime, Ma advises sunbathers to use sunscreen and to limit their exposure to the sun.

“I still would advise people to wear sunscreen,” he says. “Sunscreen is better than no protection at all.”

I suspect that last comment is an indirect reference to a recent study (mentioned in my Feb. 9, 2012 posting) that found 13% of Australians said they weren’t using any sunscreens due to their fears about nanoparticles in those products.

At this point, nanosunscreens get a very cautious pass given the information at hand.

For anyone who’s interested in how stories about science and risk, specifically concerning nanosunscreens, can get reported, I’d advise a glance at the 2020 Science blog. (Andrew Maynard, Director of the Risk Science Center at the University of Michigan, has been writing on his 2020 blog for years and covered nanosunscreens on more than one occasion.) In his May 3, 2012 posting he recounts his experience trying to refine comments about nanosunscreens and safety as a reporter is getting his story, with quotes from Andrew, to press.

ETA Aug. 17, 2012: I’d forgotten but was recently reminded that lung cells and skin cells are the same base cell until they differentiate themselves at a later stage of development. (I’m sure scientists are silently screaming but that’s my best description of the process.) So, I better appreciate why the researchers used lung cells for their study but my comment remains, I don’t slather sunscreen on my lungs. While the results of the study are interesting, they don’t seem applicable to a real world experience.

Phyto and nano soil remediation (part 2: nano)

For Part 2, I’ve included part of my original introduction (sans the story about the neighbour’s soil and a picture of Joe Martin):

I’m pleased to repost a couple of pieces on soil remediation written by Joe Martin for the Mind the Science Gap (MTSG) blog.

I wrote about the MTSG blog in my Jan. 12, 2012 posting, which focussed on this University of Michigan project designed by Dr. Andrew Maynard for Master’s students in the university’s Public Health program. Very briefly, here’s a description of Andrew and the program from the About page,

Mind the Science Gap is a science blog with a difference.  For ten weeks between January and April 2012, Masters of Public Health students from the University of Michigan will each be posting weekly articles as they learn how to translate complex science into something a broad audience can understand and appreciate.

Each week, ten students will take a recent scientific publication or emerging area of scientific interest, and write a post on it that is aimed at a non expert and non technical audience.  As the ten weeks progress, they will be encouraged to develop their own area of focus and their own style.

About the Instructor.  Andrew Maynard is Director of the University of Michigan Risk Science Center, and a Professor of Environmental Health Sciences in the School of Public Health.  He writes a regular blog on emerging technologies and societal implications at 2020science.org.

Here’s a bit more about Joe Martin,

I am a second year MPH student in Environmental Quality and Health, and after graduation from this program, I will pursue a Ph.D. in soil science.  My interests lie in soil science and chemistry, human health and how they interact, especially in regards to agricultural practice and productivity.

Here’s part 2: nano soil remediation or Joe’s Feb. 10, 2012 posting:

Last week I wrote about phytoremediation, and its potential to help us combat and undo soil contamination. But, like any good advanced society, we’re not pinning all our hopes on a single technique. A commenter, Maryse, alerted me to the existence of another promising set of techniques and technologies: nano-remediation.

For those who don’t know, nano-technology is a science which concerns itself with manipulating matter on a very small scale. Nano-particles are commonly described as being between 1 nanometer (nm) and 100 nm in size, though this is hardly a hard and fast rule. (For perspective, a nanometer is one one-millionth of a millimeter. If you aren’t inclined to the metric system, there are roughly twenty-five million nanometers per inch.) At such tiny scales, the normal properties of compounds can be altered without changing the actual chemical composition. This allows for many new materials and products (such as Ross Nanotechnology’s NeverWet spray) and for new applications for common materials (using graphene to make the well-known carbon nanotubes).
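For anyone who wants to check those conversions, the arithmetic works out as follows:

$$1\ \text{nm} = 10^{-9}\ \text{m} = 10^{-6}\ \text{mm}, \qquad 1\ \text{inch} = 25.4\ \text{mm} = 25.4 \times 10^{6}\ \text{nm} \approx 25\ \text{million nm}.$$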

When we apply the use of nano-scale particles to the remediation of contaminated soil, we are using nano-remediation. Unlike phytoremediation, this actually encompasses several different strategies, which can be broadly classed as adsorptive or reactive (Mueller and Nowack, 2010). The use of iron oxides to adsorb and immobilize metals and arsenic is not a new concept, but nano-particles offer new advantages. When I wrote “adsorb”, I was not making a spelling error; adsorption is a process by which particles adhere to the surface of another material, but do not penetrate into the interior. This makes surface area, not volume, the important characteristic. Nano-particles provide the maximum surface area-to-weight ratio, maximizing the adsorptive surfaces onto which these elements can attach. These adsorptive processes are very effective at binding and immobilizing metals and arsenic, but they do not allow for the removal of the toxic components. This may be less-than-ideal, but in places like Bangladesh, where arsenic contamination of groundwater poses major health risks, it may be just short of a miracle.
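The surface-area point is easy to put numbers on: for a sphere, surface area scales with the square of the diameter while mass scales with the cube, so surface area per unit mass grows as the particles shrink. A minimal Python sketch (the density figure is an assumption, roughly that of an iron oxide, and the diameters are arbitrary examples):

# Specific surface area (surface per unit mass) of monodisperse spheres:
#   SSA = (pi * d**2) / (rho * pi * d**3 / 6) = 6 / (rho * d)
# so halving the particle diameter doubles the adsorptive surface per gram.
RHO = 5.2e3  # kg/m^3 -- assumed, roughly the density of an iron oxide

def specific_surface_area(diameter_m, density=RHO):
    """Surface area per kilogram of material, in m^2/kg."""
    return 6.0 / (density * diameter_m)

for d in (1e-3, 1e-6, 50e-9):  # 1 mm grain, 1 micron particle, 50 nm nanoparticle
    print(f"d = {d:.0e} m -> SSA = {specific_surface_area(d):,.1f} m^2/kg")

Going from a millimeter-scale grain to a 50 nm particle multiplies the adsorptive surface per gram by a factor of twenty thousand.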

Reactive nano-remediation strategies focus on organic pollutants, and seem to work best for chlorinated solvents such as the infamous PCBs. Nano-scale zero valent iron, or nZVI, is the most widely explored and tested element used in these methods. The nZVI, or sometimes nZVI bound to various organic molecules like polysaccharides or protein chains, force redox reactions which rapidly disassemble the offending molecules.

There are other advantages to these nano-molecular techniques aside from the efficiency with which they bind or destroy the offending pollutants. In reactive remediation, the hyper-reactivity of nZVI causes it to react with other common and natural elements, such as dissolved oxygen in ground water, or nitrate and sulfate molecules, and in the process this inactivates the nZVI. While this forces multiple applications of the nano-particle (delivered in slurry form, through an injection well), it also prevents unused iron from drifting out of the treatment zone and becoming a pollutant itself. For both adsorptive and reactive remediation techniques, the active nano-particles are injected into a well dug into or near the contaminated soil and/or groundwater. When injected as a slurry, the nano-particles can drift along with the flow of ground water, effectively creating an “anti-pollution” plume. In other formulations, the active mixture is made to flow less easily, effectively creating a barrier that filters spreading pollution or through which polluted ground water can be pulled.

There are health risks and concerns associated with the production and use of nano-particles, so some caution and validation is needed before it’s used everywhere. However, there have already been some successes with nano-remediation. The example of PCB remediation with nZVI is taken from the great success the US Air Force has had. (PCB contamination is a legacy of their use as fire-suppressants.) Beyond this, while nano-remediation has not been widely applied on surface or near-surface soils, it does enable remediation in deeper soils normally only accessed by “pump-and-treat” methods (which are expensive and can have decades-long time frames). When coupled with other techniques (like phytoremediation), it fits nicely into an expanding tool bag, one that we as a society and species can use to reverse our impact on the planet (and our own health).

Further Reading: There was no way for me to represent the full sum of nano-remediation, let alone nanotechnology, in this post. It has such potential, and is developing at such a rate, that the attention it deserves is better measured in blogs (or perhaps decablogs). So if you are interested in nano-technology or nano-remediation, click through some of the links below.

List of popular blogs: http://www.blogs.com/topten/10-popular-nanotechnology-blogs/, including some very important ones on the health risks of nano-technology.

A cool site listing sites currently using nano-remediation: http://www.nanotechproject.org/inventories/remediation_map/, and another post from the same site dealing with nano-remediation [PEN webcast on site remediation]: http://www.nanotechproject.org/events/archive/remediation/

An excellent peer-reviewed article: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2799454/

Citation: Mueller NC and Nowack B. Nanoparticles for Remediation: Solving Big Problems with Little Particles. 2010. Elements, Vol. 6, pp. 395-400.

You can read about the other MTSG contributors and find links to their work here.

I have mentioned remediation before on the blog,

soil remediation and Professor Denis O’Carroll at the University of Western Ontario in my Nov. 4, 2011 posting

remediation and a patent for Green-nano zero valent iron (G-nZVI) in my June 17, 2011 posting

groundwater remediation and nano zero valent iron (nZVI) at the University of California at Santa Barbara in my March 30, 2011 posting

site remediation and drywall in my Aug. 2, 2010 posting

remediation technologies and oil spills in my May 6, 2010 posting

my March 4, 2010 posting (scroll down about 1/2 way), which is a commentary on the Project on Emerging Nanotechnologies (PEN) webcast about site remediation in Joe’s list of resources

Thank you, Joe, for giving me permission to repost your pieces. For more of Joe’s writing, you can read his posts here.

Phyto and nano soil remediation (part 1: phyto/plant)

One of my parents’ neighbours was a lifelong vegetarian and organic gardener. The neighbour, a Dutchman, had been born on the island of Curaçao around 1900 and was gardening organically by the 1940’s at the latest. He had wonderful soil and an extraordinary rose garden in the front yard and vegetables in the back, along with his compost heap. After he died in the 1980’s, his granddaughter sold the property to a couple who immediately removed the roses to be replaced with grass in the front and laid a good quantity of cement in the backyard. Those philistines sold the soil and, I imagine, the roses too.

Myself, I’m not a gardener, but I have a strong appreciation for the necessity of good soil, so I’m pleased to repost a couple of pieces on soil remediation written by Joe Martin for the Mind the Science Gap (MTSG) blog. First, here’s a little bit about the MTSG blog project and about Joe Martin.

I wrote about the MTSG blog in my Jan. 12, 2012 posting, which focussed on this University of Michigan project designed by Dr. Andrew Maynard for Masters students in the university’s Public Health program. Very briefly, here’s a description of Andrew and the program from the About page,

Mind the Science Gap is a science blog with a difference.  For ten weeks between January and April 2012, Masters of Public Health students from the University of Michigan will each be posting weekly articles as they learn how to translate complex science into something a broad audience can understand and appreciate.

Each week, ten students will take a recent scientific publication or emerging area of scientific interest, and write a post on it that is aimed at a non expert and non technical audience.  As the ten weeks progress, they will be encouraged to develop their own area of focus and their own style.

About the Instructor.  Andrew Maynard is Director of the University of Michigan Risk Science Center, and a Professor of Environmental Health Sciences in the School of Public Health.  He writes a regular blog on emerging technologies and societal implications at 2020science.org.

As for Joe Martin,

I am a second year MPH student in Environmental Quality and Health, and after graduation from this program, I will pursue a Ph.D. in soil science.  My interests lie in soil science and chemistry, human health and how they interact, especially in regards to agricultural practice and productivity.

Here’s a picture,

Joe Martin, Masters of Public Health program, University of Michigan, MTSG blog

Joe gave an excellent description of nano soil remediation but I felt it would be remiss to not include the first part on phyto soil remediation. Here’s his Feb. 3, 2012 posting about plants and soil remediation:

Pictured: The Transcendent Reality of Life and the Universe.

Plants are awesome. It’s from them that we get most of our food. It’s from plants that many of our medicines originated (such as aspirin, from willow bark). We raise the skeletons of our homes and furnish their interiors with trees. Most of our cloth is woven from plant fiber (a statement I feel comfortable making based solely on the sheer weight of denim consumed each year in this country). And although there is an entire world of water plants, all of the plants I listed above are grown in the soil*. How the individual soil particles cling to each other, how they hold water and nutrients, and how the soil provides shelter for the various macro- and micro-organisms is as important to the growth of plants as sunlight.

But no matter how proliferative, no matter how adaptive plants are, there are still spaces inaccessible to them. A clear example would be the Saharan dunes or a frozen tundra plain. However, many of the places where plants can’t survive are created by human activity. The exhaust of smelters provides one example – waste or escaped zinc, copper, cadmium, and lead infiltrate downwind soils and often exterminate many or most of the natural plants. Normal treatment options for remediating metal-contaminated soils are expensive, and can actually create hazards to human health. This is because, like some persistent organic pollutants (the infamous dioxin is a great example), the natural removal of metals from soils often proceeds very slowly, if it proceeds at all. For this reason, remediation of metal-contaminated soil often involves scraping the contaminated portion off and depositing it in a hazardous waste landfill. In cases of old or extensive pollution, the amount of soil can exceed thousands of cubic feet. In this process, contaminated dust can easily be stirred up, priming it to be inhaled by either the workers present or any local populations.

But it can be the cousins of the evicted shrubs and grasses that offer us the best option to undo the heavy metal pollution. In a process called phytoremediation, specific plants are deliberately seeded over the contaminated areas. These plants have been specifically chosen for their tendency to take up the metals in question. (In some cases, this process is also used for persistent organic pollutants, like 2,3,7,8-TCDD, infamously known as dioxin.) These plants are allowed to grow and develop their root systems, but are also selectively mowed to remove the pollutant-laden leaves and stems, and ultimately remove the contaminant from the soil system. Once the pollution level has descended to a sufficiently low level, the field may be left fallow. Otherwise, the remediating plants can be removed and the ground reseeded with natural plants or returned to agricultural, commercial, or residential use.

When it is applicable, phytoremediation offers a significant advantage over either restricted access (a common strategy which amounts to placing a fence around the contaminated site and keeping people out) or soil removal. While the polluted grass clippings must still be treated as hazardous waste, the volume and mass of the hazardous material is greatly reduced. Throughout the process, the remediating plants also serve to fix the soil in place, reducing or preventing runoff and free-blowing dust. Instead of bulldozers and many dump trucks, the equipment needed is reduced to a mower which captures grass or plant clippings and a single dump truck haul each growing season. Finally, the site does not need to be reinforced with topsoil from some other region to return it to useable space. These last few advantages can also greatly reduce the cost of remediation.

The major disadvantages of phytoremediation are time and complexity. Scraping the soil can be done in a few months or less, depending on the size of the area to be remediated. Phytoremediation takes multiple growing seasons, and if the land is a prime space for development this may be unacceptable. Phytoremediation also requires different plants for different pollutants or mixtures of pollutants. I chose the copper, zinc, lead, and cadmium mixture earlier in the article because a study from 2005 (Herrero et al., 2005) specifically attempted to measure the ability of rapeseed and sunflower to extract these metals from an artificially contaminated soil. The unfortunate reality is that each contaminant will have to be studied in such a way, meticulously pairing pollutants (or mixtures of them) with a plant. Each of the selected plants must also be able to grow in the soil to be remediated. Regardless of the type of contamination, a North American prairie grass is unlikely to grow well in a Brazilian tropical soil. For these reasons, phytoremediation plans must be individually built for each site. This is costly both in dollars and man-hours. Furthermore, there is always the problem that some pollutants don’t respond well to phytoremediation. While copper, zinc, and cadmium have all been found to respond quite well to phytoremediation, lead does not appear to. In the Herrero et al. study, the plants accumulated lead, but did so in the roots. Unless the roots were dug up, this would not effectively remove the lead from the soil system. Unfortunately, lead is one of the most common heavy metal pollutants, at least in the U.S., a legacy of our former love for leaded gasoline and paint.

Despite these disadvantages, phytoremediation presents a unique opportunity to remove many pollutants. It is by far the least environmentally destructive, and in many cases may be the cheapest, method of remediation. I am happy to see that it appears to be receiving funding and is being actively researched and developed (for those who don’t pursue the reference, the Herrero article came from The International Journal of Phytoremediation). In recent times, we’ve been hit with messages about expanding hydrofracking and the Gulf oil spill, but perhaps I can send you into this weekend with a little positivity about our environmental future. The aggregated techniques and methods which can be termed “phytoremediation” have the potential to do much good at a lower cost than many other remediation techniques. That sounds like a win-win situation to me.

* I am aware that many of these crops can be grown aero- or hydroponically. While these systems do provide many foodstuffs, they are nowhere near the level of soil-grown crops, and can be comparatively very expensive. I chose not to discuss them because, well, I aspire to be a soil scientist.

1.) Herrero E, Lopez-Gonzalvez A, Ruiz M, Lucas Garcia J, and Barbas C. Uptake and Distribution of Zinc, Cadmium, Lead, and Copper in Brassica napus var. oleifera and Helianthus annuus Grown in Contaminated Soils. 2005. The International Journal of Phytoremediation. Vol. 5, pp. 153-167.

A note on photos: Any photos I use will be CC licensed. These particular photos are provided by Matthew Saunders (banana flower) and KPC (rapeseed) under an attribution, non-commercial, no-derivatives license. I originally attempted to link to the source in the caption, but WordPress won’t let me for some reason. Until I work that out, the image home can be found under the artists’ names a few sentences earlier. I believe this honors the license and gives proper credit, but if I’ve committed some faux pas (which would not be a surprise), don’t hesitate to comment and correct me. And thanks to those who have done so in previous posts; it’s one of the best ways to learn.

Part 2: nano soil remediation follows.

For more of Joe’s pieces, read his posts here.

Davos, World Economic Forum, and risk

The World Economic Forum’s (WEF) annual meeting in Davos, Switzerland started today, Jan. 25, 2012 and runs until Jan. 29. From the WEF’s home page, here’s what they have to say about the theme for this year’s meeting,

The contextual change at the top of minds remains the rebalancing and deleveraging that is reshaping the global economy. In the near term, this transformation is seen in the context of how developed countries will deleverage without falling back into recession and how emerging countries will curb inflation and avoid future economic bubbles. In the long term, both will play out as the population of our interdependent world not only passes 7 billion but is also interconnected through information technology on a historic scale. The net result will be transformational changes in social values, resource needs and technological advances as never before. In either context, the necessary conceptual models do not exist from which to develop a systemic understanding of the great transformations taking place now and in the future.

It is hubris to frame this transition as a global “management” problem of integrating people, systems and technologies. It is an indisputable leadership challenge that ultimately requires new models, bold ideas and personal courage to ensure that this century improves the human condition rather than capping its potential. Thus, the Annual Meeting 2012 will convene under the theme, The Great Transformation: Shaping New Models, whereby leaders return to their core purpose of defining what the future should look like, aligning stakeholders around that vision and inspiring their institutions to realize that vision.

The meeting is a big deal with lots of important and/or prominent people expected to attend. I usually get my dose of WEF’s annual meeting (sometimes there’s some talk about nanotechnology) from Dr. Andrew Maynard, Director of the University of Michigan Risk Science Center and owner of the 2020 Science blog. I’m not sure if he’s attending this year but he has already profiled the WEF Global Risks 2012 Report in a Jan. 11, 2012 posting on his blog.

The World Economic Forum Global Risks Report is one of the most authoritative annual assessments of emerging issues surrounding risk currently produced. Now in its seventh edition, the 2012 report launched today draws on over 460 experts from industry, government, academia and civil society to provide insight into 50 global risks across five categories, within a ten-year forward looking window.

As you would expect from such a major undertaking, the report has its limitations. There are some risk trends that maybe aren’t captured as well as they could be – chronic disease and pandemics are further down the list this year than I would have expected. And there are others that capture the headlining concerns of the moment – severe income disparity is the top-listed global risk in terms of likelihood.

Risks are addressed in five broad categories, covering economic, environmental, geopolitical, societal and technological risks. And cutting across these, the report considers three top-level issues under the headings Seeds of Dystopia (action or inaction that leads to fragility in states); How Safe are our Safeguards? (unintended consequences of over, under and unresponsive regulation); and The Dark Side of Connectivity (connectivity-induced vulnerability). These provide a strong framework for approaching the identified risks systemically, and teasing apart complex interactions that could lead to adverse consequences.

I’m always interested in ‘unintended consequences’. (When I worked as a frontline staff member for various bureaucracies, I was able to observe the ‘unintended consequences’ of policies devised by people who had no direct experience or had forgotten their experience.) So, I was quite interested to note these items in Andrew’s excerpts from the report,

Unintended consequences of nanotechnology. Following a trend seen in previous Global Risks reports, the unintended consequences of nanotechnology – while still flagged up – are toward the bottom of the risk spectrum. The potential toxicity of engineered nanomaterials is still mentioned as a concern. But most of the 50 risks addressed are rated as having a higher likelihood and/or impact.

Unintended consequences of new life science technologies. These are also relatively low on the list, but higher up the scale of concern than nanotechnologies. Specifically called out are the possibilities of genetic manipulation through synthetic biology leading to unintended consequences or biological weapons.

Unforeseen consequences of regulation. These are ranked relatively low in terms of likelihood and impact. But the broad significance of unintended consequences is highlighted in the report. These are also linked in with the potential impact and likelihood of global governance failure. Specifically, the report calls for

“A shift in mentality … so that policies, regulations or institutions can offer vital protection in a more agile and cohesive way.”

The report’s authors also ask how leaders can develop anticipatory and holistic approaches to system safeguards; how businesses and governments can prevent a breakdown of trust following the emergence of new risks; and how governments, business and civil society can work together to improve resilience against unforeseen risks.

Andrew has a lot more detail about the risks noted in the report, so I encourage you to read the post in its entirety. I was intrigued by this final passage with its emphasis on communication and trust,

The bottom line? The report concludes that

Decision-makers need to improve understanding of incentives that will improve collaboration in response to global risks;

Trust, or lack of trust, is perceived to be a crucial factor in how risks may manifest themselves. In particular, this refers to confidence, or lack thereof, in leaders, in systems which ensure public safety and in the tools of communication that are revolutionizing how we share and digest information; and

Communication and information sharing on risks must be improved by introducing greater transparency about uncertainty and conveying it to the public in a meaningful way.

One other comment: Andrew notes that he was ‘marginally involved’ (single quotes mine) in the report as a member of the World Economic Forum Agenda Council on Emerging Technologies.

Mind the Science Gap and mentoring

There’s a bunch of Masters of Public Health students at the University of Michigan who want to communicate complex science to the public, and you’re invited. The Mind the Science Gap blog is a project of Dr. Andrew Maynard’s, presented as part of a course. Here’s a description of the course for the students (from the Syllabus webpage),

This course is designed to teach participants how to connect effectively with a non-expert audience when conveying complex science-based information that is relevant to public health, using the medium of a public science blog (http://mtsg.org).

In today’s data-rich and hyper-connected world, the gap between access to information and informed decision-making is widening.  It is a gap that threatens to undermine actions on public health as managers, policy makers, consumers and others struggle to fish relevant information from an ever-growing sea of noise.  And it is a gap that is flourishing in a world where anyone with a smart phone and an Internet connection can become an instant “expert”.

To bridge this gap, the next generation of public health professionals will need to be adept at working with new communication platforms, and skilled at translating “information” into “intelligence” for a broad audience. These skills will become increasingly relevant to communicating effectively with managers, clients and customers.  But more broadly, they will be critical to supporting evidence-informed decisions as social influences continue to guide public health activities within society.

Here’s a bit more about the blog itself and what the students will be doing (from the About page),

Mind the Science Gap is a science blog with a difference.  For ten weeks between January and April 2012, Masters of Public Health students from the University of Michigan will each be posting weekly articles as they learn how to translate complex science into something a broad audience can understand and appreciate.

Each week, ten students will take a recent scientific publication or emerging area of scientific interest, and write a post on it that is aimed at a non expert and non technical audience.  As the ten weeks progress, they will be encouraged to develop their own area of focus and their own style.

And they will be evaluated in the most brutal way possible – by the audience they are writing for!  As this is a public initiative, comments and critiques on each post will be encouraged, and author responses expected.

This is not a course on science blogging.  Rather, it is about teaching public health graduate students how to convey complex information effectively to a non-expert audience, using the medium of a science blog.

The blogging starts Jan. 16, 2012 and you are invited to participate. You can be a casual commenter, or you can join the mentors: Andrew has a list of almost 40 people who’ve committed to commenting on the content at least once per week, and he’s asking for more. BTW, I (Maryse de la Giroday) am on the list, as is Robyn Sussel, health and academic communicator and principal for Signals, a Vancouver-based communications and graphic design company. If you’re interested in signing up as a mentor, you can contact Andrew through this email address: maynarda@umich.edu

You can also sign up for RSS feeds.

Dr. Andrew Maynard discusses the Health Canada nanomaterial definition

I have often referred to and linked to Andrew Maynard’s writing on nanotechnology issues and am pleased to note he has kindly answered some questions about the Health Canada Working Definition of Nanomaterial. Before launching into his responses, here’s a little more about him.

Dr. Andrew Maynard was originally trained as a physicist and graduated with a PhD from Cambridge, UK in 1993. He worked for a number of years for the UK Health and Safety Executive before moving to the US to work with the National Institute for Occupational Safety and Health, where he helped set up a nanotechnology safety programme after 2000, when the NNI was established. By 2005, he was employed at the Project on Emerging Nanotechnologies as their Chief Science Advisor. As of April 2010, he assumed responsibility as director of the Risk Science Center at the University of Michigan School of Public Health. He consults internationally on nanotechnology safety issues. He was a member of the expert panel consulted for the nanotechnology report, Small is Different: A Science Perspective on the Regulatory Challenges of Nanotechnology, published by the Council of Canadian Academies in 2008.

Since the 2008 report for the Council of Canadian Academies, Andrew has adopted a different approach to regulating nanotechnology, a change I first noted in an April 15, 2011 posting on the University of Michigan Risk Science Center blog. Excerpted from that posting,

Engineered nanomaterials present regulators with a conundrum – there is a gut feeling that these materials present a new regulatory challenge, yet the nature and resolution of this challenge remains elusive.  But as the debate over the regulation of nanomaterials continues, there are worrying signs that discussions are being driven less by the science of how these materials might cause harm, and more by the politics of confusion and uncertainty.

The genesis of the current dilemma is entirely understandable. Engineered nanomaterials are typically the product of nanotechnology – a technology that has been lauded as leading to designed materials with unique physical and chemical properties.   Intuitively it makes sense that these unique properties could lead to unique risks.  And indeed a rapidly growing body of research is indicating that many nanoscale materials behave differently to their non-nanoscale counterparts in biological environments. Logically, it seems to follow that engineered nanomaterials potentially present risks that depend on their scale, and should be regulated appropriately.

Yet the more we learn about how materials interact with biology, the less clear it becomes where the boundaries of this class of materials called “nanomaterials” lie, or even whether this is a legitimate class of material at all from a regulatory perspective.

I waffle somewhat, largely due to my respect for Andrew and his work, and due to my belief that one needs to entertain new approaches to emerging technologies, even when they make your brain hurt. (Before proceeding with Andrew’s comments, and for anyone who’s interested in my take, here is My thoughts on the Health Canada nanomaterial definition.)

In any event, here are Andrew’s responses to my questions,

  • I have warm feelings towards this definition, especially the elaboration where I think they avoided the problem of including naturally occurring nanoparticles (as per your comment about micelles in milk); and they specify a size range without being doctrinaire about it. How do you feel about it, given that you’re not in favour of definitions?

The problem is that, while the Health Canada definition is a valiant attempt to craft a definition based on the current state of science, it is still based on a premise – that size within a well-defined range is a robust indicator of novel risk – that is questionable. Granted, they try to compensate for the limitations of this premise, but the result still smacks of trying to shoehorn the science into an assumption of what is important.

  • Do you see any pitfalls?

A large part of the problem here is an attempt to oversimplify a complex problem, without having a clear understanding of what the problem is in the first place.  Much of my current thinking – including questioning current approaches to developing definitions – revolves round trying to work out what the problem is before developing the solution.  But this makes commenting on the adequacy or inadequacy of definitions tricky, to say the least.

  • Is there anything you’d like to add?

My sincere apologies, I’ve just got to 5:00 PM on Sunday [Oct. 23, 2011] after working flat out all weekend, and am not sure I have the wherewithal to tackle this before collapsing in a heap.

I am hugely thankful that Dr. Maynard extended himself to answer my questions about the Health Canada definition of nanomaterial. To Andrew: a virtual bouquet of thanks made up of the most stunning flowers and scents you can imagine.

More on US National Nanotechnology Initiative (NNI) and EHS research strategy

In my Oct. 18, 2011 posting I noted that the US National Nanotechnology Initiative (NNI) would be holding a webinar on Oct. 20, 2011 to announce an environmental, health, and safety (EHS) research strategy for federal agencies participating in the NNI. I also noted that I was unable to register for the event. Thankfully, all is not lost. There are a couple of news items on Nanowerk which give some information about the research strategy. The first news item, U.S. government releases environmental, health, and safety research strategy for nanotechnology, from the NNI offers this,

The strategy identifies six core categories of research that together can contribute to the responsible development of nanotechnology: (1) Nanomaterial Measurement Infrastructure, (2) Human Exposure Assessment, (3) Human Health, (4) Environment, (5) Risk Assessment and Risk Management, and (6) Informatics and Modeling. The strategy also aims to address the various ethical, legal, and societal implications of this emerging technology. Notable elements of the 2011 NNI EHS Research Strategy include:

  • The critical role of informatics and predictive modeling in organizing the expanding nanotechnology EHS knowledge base;
  • Targeting and accelerating research through the prioritization of nanomaterials for research; the establishment of standardized measurements, terminology, and nomenclature; and the stratification of knowledge for different applications of risk assessment; and
  • Identification of best practices for the coordination and implementation of NNI interagency collaborations and industrial and international partnerships.

“The EHS Research Strategy provides guidance to all the Federal agencies that have been producing gold-standard scientific data for risk assessment and management, regulatory decision making, product use, research planning, and public outreach,” said Dr. Sally Tinkle, NNI EHS Coordinator and Deputy Director of the National Nanotechnology Coordination Office (NNCO), which coordinates activities of the 25 agencies that participate in the NNI. “This continues a trend in this Administration of increasing support for nanotechnology-related EHS research, as exemplified by new funding in 2011 from the Food and Drug Administration and the Consumer Product Safety Commission and increased funding from both the Environmental Protection Agency and the National Institute of Occupational Safety and Health within the Centers for Disease Control and Prevention.”

The other news item, Responsible development of nanotechnology: Maximizing results while minimizing risk, from Sally Tinkle, Deputy Director of the National Nanotechnology Coordination Office and Tof Carim, Assistant Director for Nanotechnology at OSTP (White House Office of Science and Technology Policy) adds this,

Core research areas addressed in the 2011 strategy include: nanomaterial measurement, human exposure assessment, human health, environment, risk assessment and management, and the new core area of predictive modeling and informatics. Also emphasized in this strategy is a more robust risk assessment component that incorporates product life cycle analysis and ethical, legal, and societal implications of nanotechnology. Most importantly, the strategy introduces principles for targeting and accelerating nanotechnology EHS research so that risk assessment and risk management decisions are based on sound science.

Progress in EHS research is occurring on many fronts as the NNI EHS research agencies have joined together to plan and fund research programs in core areas. For example, the Food and Drug Administration and National Institutes of Health have researched the safety of nanomaterials used in skin products like sunscreen; the Environmental Protection Agency and Consumer Product Safety Commission are monitoring the health and environmental impacts of products containing silver nanoparticles; and the National Institute of Occupational Safety and Health has recommended safe handling guidelines for workers in industries and laboratories.

Erwin Gianchandani of the Computing Community Consortium blog focuses, not unnaturally, on the data aspect of the research strategy in his Oct. 20, 2011 posting titled, New Nanotechnology Strategy Touts Big Data, Modeling,

From the EHS Research Strategy:

Expanding informatics capabilities will aid development, analysis, organization, archiving, sharing, and use of data that is acquired in nanoEHS research projects… Effective management of reliable, high-quality data will also help support advanced modeling and simulation capabilities in support of future nanoEHS R&D and nanotechnology-related risk management.

Research needs highlighted span “Big Data”…

Data acquisition: Improvements in data reliability and reproducibility can be effected quickly by leveraging the widespread use of wireless and video-enabled devices by the public and by standards development organizations to capture protocol detail through videos…

Data analysis: The need for sensitivity analysis in conjunction with error and uncertainty analysis is urgent for hazard and exposure estimation and the rational design of nanomaterials… Collaborative efforts in nanomaterial design [will include] curation of datasets with known uncertainties and errors, the use of sensitivity analysis to predict changes in nanomaterial properties, and the development of computational models to augment and elucidate experimental data.

Data sharing: Improved data sharing is a crucial need to accelerate progress in nanoscience by removing the barriers presented by the current “siloed” data environment. Because data must be curated by those who have the most intimate knowledge of how it was obtained and analyzed and how it will be used, a central repository to facilitate sharing is not an optimal solution. However, federating database systems through common data elements would permit rapid semantic search and transparent sharing over all associated databases, while leaving control and curation of the data in the hands of the experts. The use of nanomaterial ontologies to define those data elements together with their computer-readable logical relationships can provide a semantic search capability.

…and predictive modeling:

Predictive models and simulations: The turnaround times for the development and validation of predictive models is measured in years. Pilot websites, applications, and tools should be added to the NCN [Network for Computational Nanotechnology] to speed collaborative code development among relevant modeling and simulation disciplines, including the risk modeling community. The infrastructure should provide for collaborative code development by public and private scientists, code validation exercises, feedback through interested user communities, and the transfer of validated versions to centers such as NanoHUB… Collaborative efforts could supplement nanomaterial characterization measurements to provide more complete sensitivity information and structure-property relationships.
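The “data sharing” passage is the part of the strategy that most invites a concrete illustration. Here is a minimal sketch of the federated approach it describes – my own invention, with made-up field names, ontology entries, and records rather than anything drawn from the strategy or a real database – showing how common data elements plus a small ontology could let a single query fan out across independently curated sources without a central repository,

```python
# Hypothetical sketch of federated data sharing: each lab keeps control of its
# own records; a shared set of common data elements maps local field names onto
# agreed terms; a tiny ontology resolves synonyms so one semantic query can
# search every source in place. All names and values below are invented.

# Two "databases" that stay under local control, each with its own field names.
LAB_A = [
    {"material": "nano-TiO2", "size_nm": 21, "assay": "cytotoxicity"},
    {"material": "nano-Ag", "size_nm": 15, "assay": "ecotoxicity"},
]
LAB_B = [
    {"substance": "silver nanoparticle", "diameter": 15, "endpoint": "ecotoxicity"},
]

# Common data elements: map each source's local field names to shared terms.
COMMON_ELEMENTS = {
    "lab_a": {"material": "material", "size_nm": "size_nm", "assay": "endpoint"},
    "lab_b": {"substance": "material", "diameter": "size_nm", "endpoint": "endpoint"},
}

# A toy ontology: synonyms that resolve to the same material concept.
ONTOLOGY = {
    "nano-Ag": "silver nanoparticle",
    "silver nanoparticle": "silver nanoparticle",
    "nano-TiO2": "titanium dioxide nanoparticle",
}

def normalize(record, mapping):
    """Translate a local record into the shared vocabulary."""
    shared = {mapping[key]: value for key, value in record.items()}
    shared["material"] = ONTOLOGY.get(shared["material"], shared["material"])
    return shared

def federated_search(material_term):
    """Query every source where it lives; no central repository required."""
    target = ONTOLOGY.get(material_term, material_term)
    hits = []
    for name, records in {"lab_a": LAB_A, "lab_b": LAB_B}.items():
        for record in records:
            shared = normalize(record, COMMON_ELEMENTS[name])
            if shared["material"] == target:
                hits.append({"source": name, **shared})
    return hits

# Both labs' silver-nanoparticle records come back, even though the two
# databases never agreed on field names or material labels.
print(federated_search("nano-Ag"))
```

The design choice worth noticing is exactly the one the strategy argues for: the data stays curated by the people who understand it best, and only the mapping onto shared terms has to be negotiated.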

Gianchandani’s post provides an unusual insight into the importance of data where research is concerned. I do recommend reading the rest of his posting.
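In the same spirit, the “data analysis” passage – the call for sensitivity analysis alongside error and uncertainty analysis – can be made concrete with a toy example. The sketch below is entirely hypothetical: the stand-in model and every number in it are invented for illustration, not real nanomaterial science,

```python
# Toy one-at-a-time (OAT) sensitivity analysis: perturb each input of a model
# and see how strongly the output responds, which tells you which measurement
# uncertainties matter most for a hazard or exposure estimate.
# The model and all values are invented placeholders, not a real correlation.

def dissolution_rate(size_nm, surface_charge_mv, ph):
    """Hypothetical stand-in for a predicted nanomaterial property."""
    return 100.0 / size_nm + 0.05 * abs(surface_charge_mv) - 2.0 * (ph - 7.0)

BASELINE = {"size_nm": 20.0, "surface_charge_mv": -30.0, "ph": 7.4}

def oat_sensitivity(model, baseline, rel_step=0.05):
    """Relative change in the output for a 5% nudge to each input in turn."""
    base = model(**baseline)
    effects = {}
    for name, value in baseline.items():
        nudged = dict(baseline, **{name: value * (1 + rel_step)})
        effects[name] = (model(**nudged) - base) / base
    return effects

for param, effect in oat_sensitivity(dissolution_rate, BASELINE).items():
    print(f"{param}: {effect:+.1%} change in the predicted rate")
```

Even at this cartoon level, the output ranks the inputs by influence, which is the information you need to decide which nanomaterial measurements deserve the tightest error bars.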

Over on his 2020 Science blog, Dr. Andrew Maynard posted an Oct. 20, 2011 comparison of the original draft with the final report,

Given the comments received, I was interested to see how much they had influenced the final strategy.  If you take the time to comment on a federal document, it’s always nice to know that someone has paid attention.  Unfortunately, it isn’t usual practice for the federal government to respond directly to public comments, so I had the arduous task of carrying out a side-by-side comparison of the draft and today’s document.

As it turns out, there are extremely few differences between the draft and the final strategy, and even fewer of these alter the substance of the document.  Which means that, by and large, my assessment of the document at the beginning of the year still stands.

Perhaps the most significant changes were in chapter 6 – Risk Assessment and Risk Management Methods. The final strategy presents a substantially revised set of current research needs that more accurately and appropriately (in my opinion) reflect the current state of knowledge and uncertainty (page 66).  This is accompanied by an updated analysis of current projects (page 73), and additional text on page 77 stating

“Risk communication should also be appropriately tailored to the targeted audience. As a result, different approaches may be used to communicate risk(s) by Federal and state agencies, academia, and industry stakeholders with the goal of fostering the development of an effective risk management framework.”

Andrew examines the document further,

[I compared] the final strategy to public comments from Günter Oberdörster [professor of Environmental Medicine at the University of Rochester in NY state] on the draft document. I decided to do this as Günter provided some of the most specific public comments, and because he is one of the most respected experts in the field.  The specificity of his comments also provided an indication of the extent to which they had been directly addressed in the final strategy.

Andrew’s post is well worth reading especially if you’ve ever made a submission to a public consultation held by your government.

The research strategy and other associated documents are now available for access and the webinar will be available for viewing at a later date. Go here.

As an aside, I was a little surprised that I was unable to register to view the webinar live (I wonder if I’ll encounter the same difficulties later). It’s the first time I’ve had a problem viewing any such event hosted by a US government agency.