A Danish researcher has published work suggesting that while most scientists believe in conducting research responsibly, they hold multiple and, at times, conflicting definitions of what ‘responsible’ means. From an April 8, 2014 news item on phys.org,
Responsible research has been put firmly on the political agenda with, for instance, EU’s Horizon 2020 programme in which all research projects must show how they contribute responsibly to society. New research from the University of Copenhagen reveals that the scientists themselves place great emphasis on behaving responsibly; they just disagree on what social responsibility in science entails. Responsibility is, in other words, a matter of perspective.
An April 8, 2014 University of Copenhagen news release (also on EurekAlert), which originated the news item, provides some insight from the lead researcher, Maja Horst (Note: Links have been removed),
“We have, on the one hand, scientists who are convinced that they should be left alone in their ivory tower and that neither politicians nor the general public should interfere with their research activities. In their eyes, the key to conducting responsible science is to protect it from external interest because that will introduce harmful biases. Science should therefore be completely independent and self-regulated in order to be responsible,” says communication researcher Maja Horst from the University of Copenhagen. She continues:
“But, on the other hand, there are scientists who believe that the ivory tower should have an open door so that politicians, publics and industry can take part in the development of science. Such engagement is seen as the only way to ensure that science develops in accordance with the needs and values of society, and thereby fulfils its social responsibilities.”
In collaboration with PhD student Cecilie Glerup from Copenhagen Business School, Maja Horst has analysed more than 250 scientific journal articles all concerned with the role of research in society and particularly the notion of responsibility. The results of their analyses have just been published in the Journal of Responsible Innovation.
“We can conclude that all the scientists are deeply concerned that their research is responsible and useful to society; they just disagree about what it means to conduct responsible research – how transparent the ivory tower should be, if you like. This is a problem because if we have different definitions of what it means to be a responsible scientist, it becomes very difficult to have a fruitful discussion about it. It also makes it very difficult to be specific about how we want scientists to act in order to be responsible.”
Horst goes on to cite the GMO (genetically modified organisms) debate as an example of why scientists should be more ‘responsible’ and communicative (from the news release),
According to Maja Horst, discussions about responsible research are particularly important in light of the fact that entire research areas may be at risk if they are perceived to be irresponsible or controversial.
“Scientists within stem cell research, nanotechnology, or synthetic biology, which is about designing biological organisms, pay a great deal of attention to the way in which their research area is presented in the media and perceived by the public. They all saw how the GMO debate – about research into genetically modified organisms – ended in a deadlock that had serious consequences for robust research projects which simply could not attract funding. No one wanted to be associated with GMO research after the heated debates”, explains Maja Horst and adds:
“With a more balanced discussion of the role and responsibilities of scientists and society, we might have avoided the extremely rigid positions, for or against GMO, which dominated the debate.”
Horst’s last quote suggests that somehow more discussion will help us avoid controversy and rigid positions vis-à-vis science. In 2009, I wrote a series of posts about public engagement/discussion and my sense that this process is often treated as a prophylactic treatment [January 14, 2009: Public understanding/engagement of nanotechnology in Canada?; January 15, 2009: Public understanding of science projects as prophylactic treatments; January 16, 2009: The unpredictability of ‘frankenfoods’; and January 19, 2009: ‘Magic nano’ and whistling in the dark] to help scientists and policymakers avoid controversy. Quick summary: while I strongly support outreach and discussion, I don’t believe these activities will help us avoid all controversy or, more to the point, public panics such as the one in the UK and Europe concerning GMO. While Canadian scientists felt a ‘chill’ regarding GMO research and stem cell research (there was controversy and public panic, particularly in the US), the public discourse in Canada never reached the heights of public distress suffered elsewhere.
A colleague (Mairi Welman of the District of North Vancouver*) recently (April 7, 2014) sent me a link to an article relevant to this discussion of controversy and science. It’s an interview with Dan Kahan about his work at the Cultural Cognition Project at Yale Law School (mentioned somewhere in my 2009 posts), where researchers examine the means by which people develop their attitudes toward emerging science and technology. Interestingly, giving people more information doesn’t necessarily have an impact on their positions once those positions have formed. From the April 6, 2014 article by Ezra Klein for Vox.com (this is very much concerned with US politics and the public discourse in Washington, DC, but many of the ideas seem applicable elsewhere),
There’s a simple theory underlying much of American politics. It sits hopefully at the base of almost every speech, every op-ed, every article, and every panel discussion. It courses through the Constitution and is a constant in President Obama’s most stirring addresses. It’s what we might call the More Information Hypothesis: the belief that many of our most bitter political battles are mere misunderstandings. The cause of these misunderstandings? Too little information — be it about climate change, or taxes, or Iraq, or the budget deficit. If only the citizenry were more informed, the thinking goes, then there wouldn’t be all this fighting.
But the More Information Hypothesis isn’t just wrong. It’s backwards. Cutting-edge research shows that the more information partisans get, the deeper their disagreements become.
In April and May of 2013, Yale Law professor Dan Kahan — working with coauthors Ellen Peters, Erica Cantrell Dawson, and Paul Slovic — set out to test a question that continuously puzzles scientists: why isn’t good evidence more effective in resolving political debates? For instance, why doesn’t the mounting proof that climate change is a real threat persuade more skeptics?
The leading theory, Kahan and his coauthors wrote, is the Science Comprehension Thesis, which says the problem is that the public doesn’t know enough about science to judge the debate. It’s a version of the More Information Hypothesis: a smarter, better educated citizenry wouldn’t have all these problems reading the science and accepting its clear conclusion on climate change.
But Kahan and his team had an alternative hypothesis. Perhaps people aren’t held back by a lack of knowledge. After all, they don’t typically doubt the findings of oceanographers or the existence of other galaxies. Perhaps there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. Perhaps humans reason for purposes other than finding the truth — purposes like increasing their standing in their community, or ensuring they don’t piss off the leaders of their tribe. If this hypothesis proved true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the participants are better equipped to argue for their own side.
[In a passage not excerpted here, Klein describes the control version of Kahan’s experiment: subjects were asked to interpret a 2×2 table of results for a new skin cream and decide whether it made rashes better or worse, a question that tests numeracy without touching politics.]

But Kahan and his coauthors also drafted a politicized version of the problem. This version used the same numbers as the skin-cream question, but instead of being about skin creams, the narrative set-up focused on a proposal to ban people from carrying concealed handguns in public. The 2×2 box now compared crime data in the cities that banned handguns against crime data in the cities that didn’t. In some cases, the numbers, properly calculated, showed that the ban had worked to cut crime. In others, the numbers showed it had failed.
Presented with this problem a funny thing happened: how good subjects were at math stopped predicting how well they did on the test. Now it was ideology that drove the answers. Liberals were extremely good at solving the problem when doing so proved that gun-control legislation reduced crime. But when presented with the version of the problem that suggested gun control had failed, their math skills stopped mattering. They tended to get the problem wrong no matter how good they were at math. Conservatives exhibited the same pattern — just in reverse.
Being better at math didn’t just fail to help partisans converge on the right answer. It actually drove them further apart. Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber politics can make them.
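To make the mechanics of the test concrete, here is a minimal sketch in Python of the 2×2 covariance problem Klein describes. The counts are illustrative (modeled on the widely reported skin-cream version of the task), not a claim about the exact figures in the gun-control vignette; the point is that the correct answer requires comparing proportions across the two rows, while the intuitive shortcut of comparing raw counts gives the wrong one.

```python
# A minimal sketch of the 2x2 covariance problem described above.
# The counts are illustrative; the trap is that the larger raw number
# points to the opposite conclusion from the correct rate comparison.

data = {
    "ban":    {"crime_down": 223, "crime_up": 75},   # cities that banned concealed carry
    "no_ban": {"crime_down": 107, "crime_up": 21},   # cities that did not
}

def improvement_rate(row):
    """Fraction of cities in this row where crime went down."""
    return row["crime_down"] / (row["crime_down"] + row["crime_up"])

ban_rate = improvement_rate(data["ban"])        # 223/298, about 74.8%
no_ban_rate = improvement_rate(data["no_ban"])  # 107/128, about 83.6%

# Intuitive (wrong) reading: 223 > 107, so the ban "worked".
# Correct reading: compare rates, not raw counts.
print(f"Ban cities where crime fell:    {ban_rate:.1%}")
print(f"No-ban cities where crime fell: {no_ban_rate:.1%}")
print("Ban cut crime" if ban_rate > no_ban_rate else "Ban did not cut crime")
```

Run as written, the raw counts suggest the ban ‘worked’ (223 cities improved versus 107), while the rates show the opposite. Per Kahan’s findings, partisans with strong math skills only performed that second, correct calculation when its result suited their ideology.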
Fascinating, non? It gets better as Klein pursues Kahan’s (and his colleagues’) hypotheses to a logical end; there’s more in the article as Klein develops the thinking. (Neither, however, challenges the notion of scientific expertise or asks whether scientists themselves might be inclined to ‘misunderstand’ facts.) This article is an excellent introduction to Kahan’s work, which he has kept refining since I first stumbled across it in 2007.
This framing of rigid positions, of how we read science and handle facts according to our political perspectives, is quite helpful. However, there’s another aspect (in addition to scientific expertise) that neither Klein nor Kahan addresses: how does any change occur? For example, if you look at the history of electricity, you’ll find it was hugely controversial. The end of the world was predicted, amongst other things, and I’m sure that advocates for either position (bad electricity/good electricity) processed information and facts in the same fashion as we do with our own controversial science.
In sum, I think Horst has pointed out a very interesting issue with the concept of ‘scientific responsibility’. Taken together with Kahan’s hypotheses about how we process information and why we might willfully blind ourselves to facts, her work gets us a step closer to living more thoughtfully, though perhaps not less controversially, than before.
Here’s a link to and a citation for Horst’s paper,
Mapping ‘social responsibility’ in science by Cecilie Glerup & Maja Horst. Journal of Responsible Innovation, Volume 1, Issue 1, 2014, pages 31-50. DOI: 10.1080/23299460.2014.882077. Published online: 17 Feb 2014.
This is an open access paper. (The Journal of Responsible Innovation was mentioned here in an Oct. 31, 2013 posting when it was announced; Horst is on the editorial board.)
*ETA April 16, 2014: After getting permission, I added my colleague’s name to this posting, which was first published on April 10, 2014.