Peer review, reviewed

Daniel Engber of Slate has some opinions on the future of peer review for journal articles. Given the shoddy work that sometimes passes through peer review, he recommends a sea change in the way we look at new scientific results:

Paul Ginsparg, who runs a digital archive for unpublished physics papers, has suggested that putting "preprints" of scientific papers on the Web could let the community as a whole decide which papers are most useful. Unpublished work could be tracked by an objective measure—like how often it's cited or downloaded—and then passed along for formal publication. Government funders like the NIH could hire professional reviewers to evaluate grants, or they could replace grants with cash prizes for successful research.

What I find most striking is that people have been studying whether peer review of submitted manuscripts actually improves the quality of science. It turns out that's not so easy to prove:

The study of peer review turns out to be tremendously difficult. To test whether it works, you'd need to compare the quality of papers that had gone through peer review with the quality of those that hadn't. But how would you get papers for the control group, given all the professional benefits that come with peer review? And assuming you could convince scientists to forgo the process, how could you objectively judge the quality of the papers? At Rennie's fifth congress this year in Chicago, several hundred studies will be presented, but no one will claim to have answered the big question: Does peer review work?

(It reminds me of the difficulty in producing evidence for evidence-based medicine.)

The article doesn't dwell on it much, but peer review faces at least one structural problem: expertise is getting narrower and narrower. If you submit your latest methods paper to a small, field-specific journal, chances are good the editors will farm out the review to someone you know. That reviewer may even run a lab in competition with yours.

I've heard stories about five-page manuscripts that come back covered with red ink, and ten pages of typed comments on why the authors' methods are flawed, their results dubious, and their conclusions irresponsible. I've also seen manuscripts returned with nary a stray red mark.

This point was driven home to me during a graduate school class, in which a professor showed the beginning of a recent "anonymous review" on an overhead transparency. It went something like this:

"The authors of this groundbreaking manuscript have a long history of innovation in this field."

The professor blushed and wrote:

TRANSLATION: "Hey, buddy! The journal sent us one of your papers to review again! How's it going?"

I'm still not sure how bad the problem is. Maybe these anecdotes aren't representative. And journals do take steps to prevent pettiness or chumminess from influencing the review process. Also, all the reviewers I've known take their role very seriously (except, of course, when we were poking fun at the egregious mistakes of some submitters. Ah, memories.)

Who knows -- maybe Ginsparg and Engber are on to something, and in the future all submissions will go straight to the web, where an army of scientist-bloggers stands ready to critique and praise (and copy?). The format has some precedent -- like open-source coding, or those websites where musicians encourage remixes of their songs.

Science as it's practiced now would look pretty different. The emphasis would shift from blockbuster papers nurtured in secrecy to discrete chunks of cross-referenced data, dribbled out piecemeal. Writers and reviewers alike would save time, leaving them free to do what they love most. But it's still an open question how grants would be awarded under such a scheme, and whether it would even drive knowledge forward any faster.