
Science publishing, ‘high impact’, reliability, and the practice of science

Konstantin Kakaes has written a provocative and astute article (Feb. 27, 2014 on Slate) about science and publishing, in particular about ‘high impact’ journals.

In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: They created a program called SCIgen that randomly generated fake scientific papers. Thanks to SCIgen, for the last several years, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings. [emphasis mine]
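(SCIgen itself works by expanding a hand-written context-free grammar stuffed with computer-science jargon. The toy Python sketch below is not SCIgen, just an illustration of the idea; the templates and vocabulary are invented purely for this example.)

import random

# Toy grammar in the spirit of SCIgen: each template gets filled with
# randomly chosen jargon. The vocabulary here is invented for illustration.
ADJECTIVES = ["decentralized", "probabilistic", "event-driven", "scalable"]
NOUNS = ["methodologies", "epistemologies", "web services", "neural networks"]
TEMPLATES = [
    "{adj} {noun} for {noun2}",
    "towards the simulation of {adj} {noun}",
]

def fake_title():
    # Pick a sentence template and fill it with random jargon.
    template = random.choice(TEMPLATES)
    return template.format(
        adj=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        noun2=random.choice(NOUNS),
    ).capitalize()

print(fake_title())  # e.g. "Scalable epistemologies for web services"

String a few thousand of these sentences together with fake figures and references and you have the sort of gobbledygook Kakaes is describing.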

Apparently some well-known science publishers have been caught publishing these computer-generated fakes (from the Kakaes article; Note: A link has been removed),

According to Nature News, Cyril Labbé, a French computer scientist, recently informed Springer and the IEEE, two major scientific publishers, that between them, they had published more than 120 algorithmically-generated articles. In 2012, Labbé had told the IEEE of another batch of 85 fake articles. He’s been playing with SCIgen for a few years—in 2010 a fake researcher he created, Ike Antkare, briefly became the 21st most highly cited scientist in Google Scholar’s database.

Kakaes goes on to explain at least in part why this problem has arisen,

Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.

As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for, “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”

Today, the most critical measure of an academic article’s importance is the “impact factor” of the journal it is published in. The impact factor, which was created by a librarian named Eugene Garfield in the early 1950s, measures how often articles published in a journal are cited. Creating the impact factor helped make Garfield a multimillionaire—not a normal occurrence for librarians.
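For anyone who hasn’t seen the arithmetic, the standard two-year impact factor is just a ratio: citations received this year to a journal’s articles from the previous two years, divided by the number of citable items the journal published in those two years. Here’s a minimal sketch in Python; the numbers are invented purely for illustration.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    # Two-year impact factor: citations received this year to articles
    # published in the previous two years, divided by the number of
    # citable items published in those two years.
    return citations_this_year / citable_items_prev_two_years

# Invented example: 1,000 citations in 2014 to the 200 citable items
# a journal published in 2012-2013 gives an impact factor of 5.0.
print(impact_factor(1000, 200))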

The concern about ‘impact factors’, high or low, in science publishing is a discussion I first stumbled across and mentioned in an April 22, 2010 posting, where I noted that the concern with metrics extends beyond an individual career or a university’s reputation to national reputations. Kostas Kostarelos makes the same point in a Jan. 24, 2014 posting on the Guardian science blogs in his discussion of how China’s policies could affect the practice of science (Note: Links have been removed),

…  For example, if a Chinese colleague publishes an article in a highly regarded scientific journal they will be financially rewarded by the government – yes, a bonus! – on the basis of an official academic reward structure. Publication in one of the highest impact journals is currently rewarded with bonuses in excess of $30,000 – which is surely more than the annual salary of a starting staff member in any lab in China.

Such practices are disfiguring the fundamental principles of ethical integrity in scientific reporting and publishing, agreed and accepted by the scientific community worldwide. They introduce motives that have the potential to seriously corrupt the triangular relationship between scientist or clinician, publisher or editor and the public (taxpayer) funding agency. They exacerbate the damage caused by journal quality rankings based on “impact factor”, which is already recognised by the scientific community in the west as problematic.

Such measures also do nothing to help Chinese journals gain recognition by the rest of the world, as has been described by two colleagues from Zhejiang University in an article entitled “The outflow of academic articles from China: why is it happening and can it be stemmed?”.

At this point we have a system that rewards (with jobs, bonuses, etc.) prolific publication of one’s science, achieved either by the sweat of one’s brow (and/or possibly one’s beleaguered students’ brows) or by a clever algorithm. It’s a system that encourages cheating and distorts any picture we might have of scientific achievement on a planetary, national, regional, university, or individual basis.

Clearly we need to do something differently. Kakaes mentions an initiative designed for that purpose, the San Francisco Declaration on Research Assessment (DORA). Please do let me know in the Comments section if there are any other such efforts.

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.
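I haven’t seen Fanelli’s actual analysis code, so take this with a grain of salt, but the shape of the test he describes, asking whether the odds of a paper supporting its hypothesis rise with the per-author productivity of the state it came from, can be sketched as a logistic regression. The Python below uses randomly generated stand-in data (no real results implied) just to show what such a model looks like.

import numpy as np
import statsmodels.api as sm

# Stand-in data, one row per paper; values are randomly generated and
# carry no empirical meaning.
rng = np.random.default_rng(0)
productivity = rng.uniform(1, 10, size=500)          # avg. papers per academic in the author's state
supports_hypothesis = rng.integers(0, 2, size=500)   # 1 = paper reports a "positive" result

# Logistic regression: does the probability of a "positive" result
# vary with state-level productivity?
X = sm.add_constant(productivity)
result = sm.Logit(supports_hypothesis, X).fit(disp=False)
print(result.params)   # intercept and slope on productivity
print(result.pvalues)  # significance of the productivity term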

I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled, Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
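As an aside, the Hirsch index (h-index) Lane mentions is easy to state: a researcher has index h if h of their papers have each been cited at least h times. A minimal Python sketch, with invented citation counts:

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each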

The range of comments is quite interesting; I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure science work (citations in other papers?), how do we define the impact of science work (do we use social networks?), and, if social networks are part of the answer, how do we measure impact within a social network?

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that he abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.
http://pdf.com.ru/a/ky1300.pdf

[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure or even track all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?

I’m not counselling that the measurement process be abandoned, but since people are revisiting the issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish new and better science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5000 while the US published about 3000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences and both are two years old in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the conference seems to be aimed at government employees. David Suzuki (TV host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and serving on the advisory committee for the folks organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor, as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.