I’ve been meaning to write a piece about science publishing and peer review in the light of a number of recent articles and postings on the subject. As there hasn’t been anything new for at least three or four days now, this might be an opportune moment.
I did touch on a related topic in an April 22, 2010 posting where, amongst other issues, I focused on a paper about publication bias. From my posting (quoting a news item on physorg.com),
Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.
Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.
Most if not all of these papers, publication bias included, would have been peer-reviewed, and that time-honoured system is currently being tested in a number of ways.
Scientists spend too much of their time publishing papers and ploughing through the mountains of papers produced by their colleagues, and not enough time doing science.
That’s the observation – and frustration – that spurred Fabio Casati and his collaborators to launch LiquidPublication, an EU-financed [European Union] research project that seeks to revolutionise how scientists share their work and evaluate the contributions of their peers.
“The more papers you produce, the more brownie points you get,” says Casati. “So most of your time is spent writing papers instead of thinking or doing science.”
Besides wasting untold hours, Casati says, the current scientific publication paradigm produces other toxic fallout including an unduly heavy load for peer reviewers and too many papers that recycle already published research or dribble out results a bit at a time.
“The current system generates a tremendous amount of noise,” he says. “It’s hard to find interesting new knowledge because there’s so much to see.”
Casati and his colleagues are developing and promoting a radically new way to share scientific knowledge, which they call “liquid publication”. They want to tap the power of the Web – including its ability to speed communication, facilitate data storage, search and retrieval, and foster communities of interest – to replace traditional peer reviews and paper publications with a faster, fairer and more flexible process. [emphasis mine]
David Bruggeman at Pasco Phronesis commented on this project,
The project acknowledges the influence of arXiv.org, but would have some important differences. The plan includes having scientists and so-called ‘invisible colleges’ of researchers develop their own journals which would be created via the platform. There is also the thought that readers of these papers and journals could add value by linking related papers.
David goes on to express support for the project while noting that LiquidPublication should not be used in place of peer review, and that the more means we have of publishing research and critiquing it, the better.
The August 2010 issue of The Scientist features three articles on peer review. From the Breakthroughs from the Second Tier article by the staff,
Often the exalted scientific and medical journals sitting atop the impact factor pyramid are considered the only publications that offer legitimate breakthroughs in basic and clinical research. But some of the most important findings have been published in considerably less prestigious titles.
Take the paper describing BLAST—the software that revolutionized bioinformatics by making it easier to search for homologous sequences. This manuscript has, not surprisingly, accumulated nearly 30,000 citations since it was published in 1990. What may be surprising, however, was the fact that this paper was published in a journal with a current impact factor of 3.9 (J Mol Biol, 215:403–10, 1990). In contrast, Nature enjoys an impact factor more than 8 times higher (34.5), and Science (29.7) is not far behind.
One of the most commonly voiced criticisms of traditional peer review is that it discourages truly innovative ideas, rejecting field-changing papers while publishing ideas that fall into a status quo and the “hot” fields of the day—think RNAi, etc. [emphasis mine] Another is that it is nearly impossible to immediately spot the importance of a paper—to truly evaluate a paper, one needs months, if not years, to see the impact it has on its field.
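As an aside, the impact factor figures quoted above come from a simple ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable items published in those two years. A minimal sketch (the citation counts below are made up for illustration, not real data for any journal):

```python
# Sketch of the standard two-year journal impact factor calculation.
# The numbers used below are hypothetical, not real figures for any journal.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Citations in year Y to items from years Y-1 and Y-2,
    divided by the count of citable items from Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical journal receiving 1950 citations to 500 citable items
# would have an impact factor of 3.9.
print(round(impact_factor(1950, 500), 1))
```

The point the article is making, of course, is that this single number says little about whether any individual paper (like the BLAST paper) turns out to be a breakthrough.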
Jef Akst offers a specific example in her article, I Hate Your Paper,
Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.
Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it’s employed by higher-impact journals. Theoretically, peer review should “help [authors] make their manuscript better,” he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons—if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example—or simply for convenience, since top journals get too many submissions and it’s easier to just reject a paper than spend the time to improve it. [emphasis mine] Regardless of the motivation, the result is the same, and it’s a “problem,” Kaplan says, “that can very quickly become censorship.”
In the third article, this one by Sarah Greene, there’s mention of a variation on the traditional peer review, post-publication peer review (PPPR),
In the basic formulation of PPPR, qualified specialists (peers) evaluate papers after they are published. Instead of hiding reviewers’ identities and comments, they become part of the published record and open to community review and response. Renowned educator Paolo Freire once said, “To impede communication is to reduce men to the status of things.” PPPR at its best facilitates ongoing dialogue among authors, peer reviewers, and readers.
Presumably, PPPR will be part of the LiquidPublication experience. Interestingly, in a recent article on Techdirt (a site focused on intellectual property issues), there was this mention of PPPR,
Apparently, people are realizing that a much more open post-publication peer review process, where anyone can take part, is a lot more effective:
We are starting to see examples of post-publication peer review and see it radically out-perform traditional pre-publication peer review. The rapid demolition […] of the JACS hydride oxidation paper last year (not least pointing out that the result wasn’t even novel) demonstrated the chemical blogosphere was more effective than peer review of one of the premiere chemistry journals. More recently 23andMe issued a detailed, and at least from an outside perspective devastating, peer review (with an attempt at replication!) of a widely reported Science paper describing the identification of genes associated with longevity. This followed detailed critiques from a number of online writers.
I’m not sure I’m ready to get quite as excited about PPPR as some of its supporters are. Traditional peer review is not the only process that can be manipulated, as the recent events with Virology Journal point out. I first came across the incident in an article by David Zax in Fast Company (a publication which mostly focuses on business, marketing, design, and technology),
It must get tedious sometimes, running a scientific journal–all that dull data, all those pesky p-values. Wouldn’t it be cool if science journals had accounts of Biblical miracles, and speculation on events thousands of years in the past? That seems to be what the editors of Virology Journal were thinking, when they decided to publish a speculative analysis of a Biblical miracle by Ellis Hon et al., of Hong Kong.
Even from the very first sentence of the abstract, which mentions a woman with a fever cured “by our Lord Jesus Christ,” it ought to have been clear to the article’s reviewers that it was not written to the highest objective scientific standards. The authors go on to present evidence that the woman likely had the flu: “The brief duration, high fever, and abrupt cessation of fever makes influenza disease probable.”
The paper was swiftly eviscerated online, particularly on the blog Aetiology.
An apology was issued by both the editor and the author fairly soon after, as per this news item on physorg.com,
Editor-in-Chief of the journal, Robert F. Garry, publicly apologized for publishing the article, saying it “clearly does not provide the type of robust supporting data required for a case report and does not meet the high standards expected of a peer-reviewed scientific journal.” He also apologized for any “confusion or concern” the article may have created among readers.
One of the blogs that brought the paper to notice was This Scientific Life, by Bob O’Hara. O’Hara said the lead author of the paper, Kam L.E. Hon from the Department of Paediatrics at the Chinese University of Hong Kong, had replied by email to his queries and confirmed he had agreed to the retraction and was “astonished” the article had produced such a negative response since it was only intended for thought provocation. He went on to apologize for the inconvenience caused to the Journal and anxiety caused to himself. He said he would never write this kind of article again. [emphasis mine]
You might think this was a bad piece of science caught by the vigilant online community, but according to an August 17, 2010 posting by Kent Anderson at the Scholarly Kitchen,
Recently, BioMed Central’s Virology Journal published a case report speculating that the woman in the Biblical story in which Jesus cures her of fever was suffering from the flu. The case report was obviously quite tongue-in-cheek, akin to many others in the literature, but also applied clinical reasoning to the scant evidence offered by the Bible.
In most case reports that seek to plumb historical facts, investigators review documentation, try to translate what they can into modern meaning, then attempt a diagnosis, usually for the sport of it.
I’ll wager that the authors and editors expected this little bit of fluff to pass quietly into oblivion, a harmless lark in an obscure journal. It’s not an unreasonable expectation. In the traditional journal world, reports like this were shielded from widespread evaluation due to relatively small circulations in tight-knit communities. Even in the last decade, the lack of robust commenting on journal articles has helped insulate scholars.
Today, things are different. Now, a science blogosphere bent on sensationalism and hungry for topics is perfectly willing to pick up on a silly article and beat the bejeezus out of it.
Sometimes people behave badly. No system is a perfect bulwark against this tendency. So while the Virology Journal article had been peer-reviewed (a process with its own problems), it was the set of post-publication reviews that resulted in an apology from both the editor and the author, who has promised he’ll never write this type of article again. In essence, a kind of mob mentality seems to have ruled, and I expect that mob mentality will be seen in the PPPR process as well. My conclusion is that the more ways we have of disseminating and publishing information, the better.