Peer review without the peers?

December 11, 2011

Academic peer review tends to be slow, imprecise, labor-intensive, and opaque.

A number of intriguing reform proposals and alternative models exist, and hopefully some of these ideas will lead to improvements. However, whether they do or not, I suspect that some form of peer review will continue to exist (at least for the duration of my career) and that many reviewers (myself included) will continue to find the process of doing the reviews to be time-consuming and something of a hassle.

The most radical solution is to shred the whole process T-Rex style.

Gideon Burton, 2009, cc-by-sa

This is sort of what has already happened in disciplines that use arXiv or similar open repositories where working papers can be posted and made available for immediate critique and citation. Such systems have their pros and cons too, but if nothing else they decrease the amount of time, money and labor that go into reviewing for journals and conferences, while increasing the transparency. As a result, they provide at least a useful complement to existing systems.

Over a conversation at CrowdConf in November, some colleagues and I came up with a related, but slightly less radical proposal: maybe you could keep some form of academic peer review, but do it without the academic peers?

Such a proposition calls into question one of the core assumptions underlying the whole process – that reviewers’ years of training and experience (and credentials!) have endowed them with special powers to distinguish intellectual wheat from chaff.

Presumably, nobody would claim that the experts make the right judgment 100% of the time, but everybody who believes in peer review agrees (at least implicitly) that they probably do better than non-experts would (at least most of the time).

And yet, I can’t think of anybody who’s ever tested this assumption in a direct way. Indeed, in all the proposals for reform I’ve ever heard, “the peers” have remained the one untouchable, un-removable piece of the equation.

That’s what got us thinking: what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?

The way to test the idea would be to try to replicate the outcomes of some existing review process using non-expert reviewers. In an ideal world, you would take a set of papers that had been submitted for review and had received a range of scores along some continuous scale (say, 1 to 5 – like papers reviewed for ACM Conferences). Then you would develop a protocol to distribute the review process across a pool of non-expert reviewers (say, using CrowdFlower or some similar crowdsourcing platform). Once you had review scores from the non-experts, you could aggregate them in some way and/or compare them directly against the ratings from the experts.
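The aggregate-and-compare step could be sketched in a few lines. This is a minimal illustration, not a proposal for the actual protocol: the scores are invented, the aggregation is a plain mean, and agreement is measured with a Spearman rank correlation (no tie handling).

```python
from statistics import mean

def aggregate_crowd_scores(crowd_reviews):
    """Average each paper's non-expert scores into a single rating."""
    return [mean(scores) for scores in crowd_reviews]

def rank_agreement(xs, ys):
    """Spearman rank correlation between two score lists (no tie handling)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / var

# Hypothetical data: expert scores (1-5 scale) and, for each paper,
# a list of scores from several non-expert reviewers.
expert = [4.5, 2.0, 3.5, 1.5]
crowd = [[5, 4, 4], [2, 3, 1], [4, 3, 4], [1, 2, 2]]
agg = aggregate_crowd_scores(crowd)
print(round(rank_agreement(expert, agg), 2))  # 1.0: crowd ranks papers identically here
```

A correlation near 1 would mean the crowd orders the papers much as the experts do; in a real experiment you would also want to vary the aggregation rule (median, trimmed mean, weighting by reviewer reliability) and the pool size.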

2007 diylibrarian cc-by-nc-sa

Would it work? That depends on what you would consider success. I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.

4 Responses to “Peer review without the peers?”

  1. Interesting idea, but…

    Since I don’t want to flood the comments section, here is my reply:

    I am happy about comments and keeping the discussion alive.

    • aaron Says:

      Thanks for the thoughtful response and apologies I didn’t reply sooner, René! Your post makes a number of interesting points. Maybe most importantly I should say that I agree that the fact that reviewing is burdensome sets up some objectionable incentives from the point of view of knowledge production. That said, I think it’s all the more reason we should experiment with all the tools we can that might help us focus the valuable time and energy of trained reviewers exclusively on the process of sorting out which papers are best! With that in mind, I still think it’s worth toying with crowdsourced peer review to figure out how it might be useful.

  2. Rocky Sun Says:

    A similar topic was treated in “The Wisdom of Crowds.” Also, it seems like the faster a technology (or any field, really) develops, the more significant alternative methods of distribution become. If it takes too long to publish, it’s no longer relevant. And the time it takes to develop expert credentials may be more indicative of detachment from the cutting edge of a field than of immersion in the industry. In terms of selecting better papers… I’ve heard it said that the most-cited papers indicate which ideas are the most developed and/or important. It would be interesting to build an algorithm for sorting papers similar to the one Google uses to rank the significance of websites…

    • aaron Says:

      Hi Rocky! I think the idea here would be precisely to figure out some ways of cutting down on the time-to-publication for some fields/journals. As it stands, peer review performs a lot of different functions among academics and industry folks alike (legitimation, quality control, status ordering, knowledge dissemination, impact/citation metrics, etc.), so I worry it might be impossible and/or terrible to reform with just the speed objective in mind, but a little bit more efficiency wouldn’t be bad.
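The citation-ranking idea floated in the comments is essentially PageRank run over the citation graph instead of the web graph. A minimal power-iteration sketch on a hypothetical four-paper graph (the graph and parameters are purely illustrative):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration over a citation graph.

    links[i] is the list of papers that paper i cites; rank flows
    from citing papers to cited papers.
    """
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i, cited in enumerate(links):
            if cited:
                share = damping * rank[i] / len(cited)
                for j in cited:
                    new[j] += share
            else:
                # A paper with no outgoing citations spreads its
                # rank evenly over the whole graph.
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Toy graph: paper 0 cites nothing and is cited by papers 1, 2, and 3.
citations = [[], [0], [0, 1], [0]]
scores = pagerank(citations)
print(max(range(len(scores)), key=lambda i: scores[i]))  # 0: the most-cited paper
```

One obvious wrinkle for papers, as opposed to web pages, is that citations accrue slowly, so a ranking like this rewards established work and says little about a brand-new submission – which is exactly the speed problem the thread is circling around.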
