I’m going to start with an excellent study about publication bias and careerism in science papers that I stumbled across this morning on physorg.com. From the news item,
Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.
Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.
I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],
Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research, but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century. Since then, the competitiveness and precariousness of scientific careers have increased, and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity, and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals and increase the likelihood to witness scientific misconduct (but see ). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate.
Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.
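Out of curiosity, here’s what the simplest version of that kind of test might look like. This is my own toy sketch in Python, not Fanelli’s code, and every number in it is invented: split papers by the per-capita productivity of their home state and ask whether the “positive result” rates differ more than chance would suggest.

```python
# A toy illustration (not Fanelli's actual code or data) of the basic idea:
# do papers from more "productive" states support their hypothesis more often?
# "Productivity" here stands in for papers per academic; results are invented.
from math import sqrt

# (state_productivity, supported_hypothesis) pairs -- fabricated examples
papers = [
    (1.2, True), (3.4, True), (0.8, False), (2.9, True),
    (0.5, False), (3.1, True), (1.0, True), (2.2, False),
    (0.7, False), (2.8, True), (1.5, False), (3.3, True),
]

# Split the sample at the median productivity
median = sorted(p for p, _ in papers)[len(papers) // 2]
high = [s for p, s in papers if p >= median]
low = [s for p, s in papers if p < median]

def positive_rate(group):
    return sum(group) / len(group)

p_high, p_low = positive_rate(high), positive_rate(low)

# Two-proportion z-test: is the difference larger than chance would suggest?
p_pool = (sum(high) + sum(low)) / (len(high) + len(low))
se = sqrt(p_pool * (1 - p_pool) * (1 / len(high) + 1 / len(low)))
z = (p_high - p_low) / se

print(f"positive-result rate, high-productivity states: {p_high:.2f}")
print(f"positive-result rate, low-productivity states:  {p_low:.2f}")
print(f"z-statistic for the difference: {z:.2f}")
```

Fanelli’s real analysis is far more careful and, as the news item notes, accounts for discipline and funding availability; the toy above only illustrates the shape of the question.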
This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled, Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],
Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
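As an aside, the Hirsch index mentioned in the quote has a one-line definition: a researcher has index h if h of his or her papers have each been cited at least h times. Here’s a quick Python sketch (the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Invented citation counts for one researcher's papers
print(h_index([42, 17, 9, 6, 3, 1, 0]))  # prints 4: four papers with >= 4 citations
```

One of the “well-known flaws” is visible right in the definition: citations only accumulate, so the index tends to favour older researchers.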
The range of comments is quite interesting; I was particularly taken by something Martin Fenner said,
Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.
(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)
There are a number of issues here: how do we measure science work (citations in other papers?), and how do we define the impact of science work (do we use social networks?), which raises a further question: how do we measure impact when we’re talking about a social network?
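To make that last question concrete, here is one deliberately naive, hypothetical way a social-network impact measure could work, sketched in Python with invented names and follower counts: weight each share of a paper by the sharer’s reach instead of just counting shares.

```python
# Invented example: who shared or linked to each paper
shares = {
    "paper_A": ["bob", "carol", "dave", "frank"],  # many low-reach shares
    "paper_B": ["erin"],                           # one high-reach share
}

# Invented follower counts, a crude proxy for each sharer's audience
followers = {"bob": 90, "carol": 300, "dave": 15, "frank": 60, "erin": 4000}

def reach_weighted_score(paper):
    """Sum the audiences of everyone who shared the paper --
    a naive 'impact' score that a raw share count would miss."""
    return sum(followers.get(user, 0) for user in shares[paper])

for paper in shares:
    print(f"{paper}: {len(shares[paper])} shares, "
          f"reach-weighted score {reach_weighted_score(paper)}")
```

Even this toy shows how a network-based metric and a simple count can rank the same papers differently, which is exactly why the choice of metric matters.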
Now, I’m going to add the timeline as an issue. Over what period of time are we measuring the impact? I ask because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper that received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that he abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,
In addition, HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide Thin Films,” Solid State Electronics, 1968.
[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which gives more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]
How do you measure or even track all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?
I'm not counselling that the process be abandoned, but since people seem to be revisiting the issues, it's an opportune time to get all the questions on the table.
As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something with both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President's Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US's possibly eroding leadership in the field. (China published about 5,000 while the US published about 3,000.)
I don't have much more to say other than I hope to see some new metrics.
Canadian science policy conferences
We have two such conferences, and both are two years old in 2010. The first is being held in Gatineau, Québec, May 12 - 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the conference seems to be aimed at government employees. David Suzuki (tv host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.
The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.
I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and serving on the advisory committee for the folks organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor, as he works at Simon Fraser University, Vancouver, Canada.
That’s all for today.