Science publishing, ‘high impact’, reliability, and the practice of science

Konstantin Kakaes has written a provocative and astute article (Feb. 27, 2014 on Slate) about science and publishing, in particular about ‘high impact’ journals.

In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: They created a program called SCIgen that randomly generated fake scientific papers. Thanks to SCIgen, for the last several years, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings. [emphasis mine]
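
SCIgen’s trick is a recursive context-free grammar: each rule maps a placeholder to one of several possible expansions, and repeated random expansion yields text that is grammatical but meaningless. Here is a minimal sketch of that technique in Python; the toy grammar and all of its vocabulary are invented for illustration and bear no relation to SCIgen’s actual (much larger) rule set.

```python
import random

# A toy context-free grammar in SCIgen's spirit: each UPPERCASE
# nonterminal maps to a list of possible expansions. Everything here
# is invented for illustration; SCIgen's real rule set is far larger.
GRAMMAR = {
    "SENTENCE": [
        "We present NAME, a SYSTEM for PROBLEM.",
        "Our evaluation of NAME shows that PROBLEM is RESULT.",
    ],
    "NAME": ["Flok", "Dezv", "MemeTable"],
    "SYSTEM": ["framework", "heuristic", "methodology"],
    "PROBLEM": ["the simulation of PARADIGM", "the analysis of PARADIGM"],
    "PARADIGM": ["red-black trees", "lambda calculus", "DHTs"],
    "RESULT": ["NP-complete", "maximally efficient", "impossible"],
}

def expand(symbol: str) -> str:
    """Recursively replace nonterminals with randomly chosen expansions."""
    core = symbol.rstrip(".,")      # so "NAME," still matches the key "NAME"
    suffix = symbol[len(core):]     # reattach any trailing punctuation
    if core not in GRAMMAR:
        return symbol               # a terminal word: nothing left to expand
    production = random.choice(GRAMMAR[core])
    return " ".join(expand(tok) for tok in production.split()) + suffix

if __name__ == "__main__":
    # Each call yields a fresh, syntactically valid, meaningless sentence.
    for _ in range(3):
        print(expand("SENTENCE"))
```

Because the recursion only bottoms out at terminal words, every run produces a complete, well-formed sentence: structure without meaning, which is exactly why the output can skim past anyone who isn’t reading closely.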

Apparently some well-known science publishers have been caught publishing these fakes (from the Kakaes article; Note: A link has been removed),

According to Nature News, Cyril Labbé, a French computer scientist, recently informed Springer and the IEEE, two major scientific publishers, that between them, they had published more than 120 algorithmically-generated articles. In 2012, Labbé had told the IEEE of another batch of 85 fake articles. He’s been playing with SCIgen for a few years—in 2010 a fake researcher he created, Ike Antkare, briefly became the 21st most highly cited scientist in Google Scholar’s database.

Kakaes goes on to explain at least in part why this problem has arisen,

Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.

As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for, “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”

Today, the most critical measure of an academic article’s importance is the “impact factor” of the journal it is published in. The impact factor, which was created by a librarian named Eugene Garfield in the early 1950s, measures how often articles published in a journal are cited. Creating the impact factor helped make Garfield a multimillionaire—not a normal occurrence for librarians.
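
Kakaes doesn’t reproduce the arithmetic, but the conventional two-year impact factor (the version popularized through Garfield’s Journal Citation Reports) is just a ratio:

```latex
\[
\mathrm{IF}_{y} \;=\; \frac{C_{y}}{P_{y-1} + P_{y-2}}
\]
```

where \(C_y\) is the number of citations received in year \(y\) to items the journal published in the two preceding years, and \(P_{y-1}\) and \(P_{y-2}\) are the counts of citable items published in those years. For example, 200 citations in 2014 to 100 articles published across 2012 and 2013 would give an impact factor of 2.0.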

The concern about ‘impact factors’, high or low, in science publishing is a discussion I first stumbled across and mentioned in an April 22, 2010 posting, where I noted that the concern with metrics extends beyond an individual career or a university’s reputation to national reputations. Kostas Kostarelos, in a Jan. 24, 2014 posting on the Guardian science blogs, makes this point in his discussion of how China’s policies could affect the practice of science (Note: Links have been removed),

…  For example, if a Chinese colleague publishes an article in a highly regarded scientific journal they will be financially rewarded by the government – yes, a bonus! – on the basis of an official academic reward structure. Publication in one of the highest impact journals is currently rewarded with bonuses in excess of $30,000 – which is surely more than the annual salary of a starting staff member in any lab in China.

Such practices are disfiguring the fundamental principles of ethical integrity in scientific reporting and publishing, agreed and accepted by the scientific community worldwide. They introduce motives that have the potential to seriously corrupt the triangular relationship between scientist or clinician, publisher or editor and the public (taxpayer) funding agency. They exacerbate the damage caused by journal quality rankings based on “impact factor”, which is already recognised by the scientific community in the west as problematic.

Such measures also do nothing to help Chinese journals gain recognition by the rest of the world, as has been described by two colleagues from Zhejiang University in an article entitled “The outflow of academic articles from China: why is it happening and can it be stemmed?”.

At this point we have a system that rewards (with jobs, bonuses, etc.) prolific publication of one’s science, achieved either by the sweat of one’s brow (and/or the brows of one’s possibly beleaguered students) or by a clever algorithm. It’s a system that encourages cheating and distorts any picture we might have of scientific achievement at the planetary, national, regional, university, or individual level.

Clearly we need to do things differently. Kakaes mentions an initiative designed for that purpose, the San Francisco Declaration on Research Assessment (DORA). Please do let me know in the Comments section if there are any other such efforts.