Much ado about nothing? Oct 8, 2010
Philip Davis over at The Scholarly Kitchen helpfully pointed out some surveys that bring data to bear on the basic premise which motivates PubCreds: that the peer review system is indeed breaking down (or at least is likely to do so) because of too many papers chasing too few willing referees. I want to discuss these studies in some detail, in part because at least some people see these studies as clear-cut, unambiguous evidence that there are no problems in the current peer review system and that no problems are likely to develop.
I'll discuss in most detail the 2007 global survey of 3,040 academics, commissioned by the Publishing Research Consortium, which asked about attitudes towards, and experiences with, the peer review system. (A 2009 follow-up survey of 4,000 academics found similar results in those areas examined by both surveys.) The most relevant findings, with my comments:
-64% satisfied with the current system, 12% dissatisfied, 22% neither satisfied nor dissatisfied. Whether you regard that as cause for concern or not probably depends on whether you're a glass-half-full or glass-half-empty person.
-32% agree and 36% disagree that the current system is the best achievable, 31% neither agree nor disagree. 32% agree that peer review needs a complete overhaul, 35% disagree, 30% neither agree nor disagree. This, on the other hand, is hard to regard as anything other than worrisome. Clearly there's a widespread perception of serious problems. While cross-tabs weren't provided, presumably many of those 64% who are satisfied with the current system are satisfied only because they don't see any better alternatives, not because they think the current system is problem-free. Otherwise why would only about 1/3 agree that the system is the best achievable, and disagree that it needs a complete overhaul?
-Respondents were asked about, and gave at most mixed support for, many possible modifications and alternatives to the peer review system, designed to address a variety of issues. Widespread dissatisfaction with ideas like 'crowdsourcing' indicates a need for new ideas that weren't asked about--like PubCreds. But in fairness, reactions to a briefly summarized idea may not be representative of reactions based on more information and discussion. Numerous commenters on PubCreds have told us they were initially skeptical but were converted after reading our article. Opinion polls aren't a substitute for informed debate.
-Respondents reviewed an average of 8 papers in the previous 12 months, but the distribution is skewed: 59% had reviewed 5 or fewer, while 18% had reviewed 11 or more. That is, a relatively small fraction of people do a relatively large fraction of the reviewing. The distribution of publications (and presumably submissions) is also skewed. Respondents reported an average of 8-9 publications in the last 12 months, with 46% reporting 5 or fewer and 24% reporting 11 or more. Unfortunately, the study doesn't report in any detail how the amount of reviewing people do correlates with the amount they submit. But subgroups that publish more papers (e.g., older researchers) do also tend to review more. So in broad terms, people who submit more also review more. The idea that we want our best people to be able to publish a lot (and so to submit a lot) has sometimes been presented as a criticism of PubCreds, the assumption being that PubCreds would somehow level the scientific meritocracy. But that's not the case. A system in which people obey 'the golden rule of reviewing' (review 2-3 times as many papers as you submit) is, in both principle and (at least broadly speaking) practice, consistent with a system in which a small fraction of 'leaders in the field' do a large fraction of the publishing and reviewing (a toy sketch of such a credit ledger appears after this list). The devil, of course, is in the details. The looser the correlation between submitting and reviewing, the stronger the argument for PubCreds. But there's no reason to think that PubCreds would be anti-meritocratic. Heck, PubCreds might well encourage some highly productive authors to review more, which would increase the amount of reviewing done by leading scientists (and without forcing them to publish less; more on this below).
-On average, respondents were prepared to review a maximum of 9 papers/year. In other words, the average reviewer is almost maxed out, since respondents report actually reviewing 8 papers/year. I suspect this is not a coincidence--probably a fair number of people feel very busy, and so feel like they're already doing all they can in every area of their professional lives. 'Active' reviewers (those who do 6 or more reviews/year, comprising 44% of all reviewers) consider themselves overloaded on average--this group reports doing an average of 14 reviews/year compared to a stated 'maximum' of only 13. 'Active' reviewers are responsible for 79% of all reviews. In other words, the minority of people who do the majority of the reviewing also feel themselves to be overloaded. That strikes me as a rather precariously balanced state of affairs--what happens if that overworked minority starts declining a greater proportion of requests? (A back-of-envelope calculation after this list makes this concrete.) Again, just because there are good reasons for this skewed distribution doesn't mean that the system isn't also made fragile by that very same skew.
-Reviewers decline an average of 2.1 invitations to review per year, which works out to about 20% of all requests (they report actually doing 8 reviews/year, so roughly 10 requests in total). That's a lower percentage than I would've guessed, so I actually find that number kind of reassuring. Active reviewers decline a lower proportion of invitations, which is unsurprising since editors, or the editorial staff, tend to steer review requests towards those they know will agree. By far the most common reason for declining to review is lack of time, either because of other commitments (including other reviewing) or the journal's short deadline. This touches on a misunderstanding I've encountered occasionally. It's claimed that the 'real' problem is that academics face increasing demands on their time, and that PubCreds does nothing to address that. In fact, the need for PubCreds has nothing to do with why reviewers decline to review. Reviewers could decline because the voices in their heads told them to, and the effect would be the same. PubCreds isn't a way to force some people to submit less in order to review more, as if submitting less were the only way to free up time for more reviewing. PubCreds is a way to force people to, over the course of their careers, review in appropriate proportion to their submitting. How they allocate their time in order to meet that standard is entirely up to them. Indeed, one could argue that it's precisely because academics are so busy that a PubCred system is needed. If you're a busy academic who struggles to find the time to do everything you have to do, you might well free up some time by declining requests to review, since, after all, reviewing is 'optional'.
-Respondents were evenly split on whether peer review takes too long. Another piece of evidence that the current system, while not necessarily broken, is not in great shape. I certainly wouldn't claim that speed is everything--but it's something. One key question is how journals and reviewers respond to the widespread desire for more rapid review. According to the survey, rejection without review for reasons of merit is relatively uncommon overall (<15% of manuscripts suffer this fate). But my experience is that at certain leading journals it has rapidly become commonplace.
-I'm a little surprised at the relatively substantial fraction of respondents (about 40-50%) who said that the usual incentives to review (e.g., payment in kind, such as a waiver of page charges or a free journal subscription for a year) affect their willingness to review. In part, these numbers are driven by respondents who lack the financial or institutional resources to pay for page charges and subscriptions. Those respondents quite rightly value anything that lowers the access barriers they face.
-One limitation of these surveys is that they're snapshots (well, the 2007 and 2009 surveys provide a very short time series). Those who focus only on the reassuring numbers in these surveys (and there certainly are many reassuring numbers) are implicitly downplaying concerns that spring from the anecdotal, but longer-term, experiences of many researchers. My own experience is that the current state of the system is worse than it was 15 years ago, and many of my colleagues feel the same. Someone should perhaps do a survey on that...
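Since the 'golden rule' point above is easiest to see concretely, here's a minimal sketch of how a PubCred-style credit ledger could work. To be clear, this is a toy illustration, not a specification of our actual proposal: the particular numbers (1 credit earned per review, 3 credits spent per submission) are assumptions chosen only to match the 2-3 reviews-per-submission ratio of the golden rule.

```python
# Toy sketch of a PubCred-style ledger. The specific numbers (1 credit
# earned per review, 3 credits spent per submission) are illustrative
# assumptions, not the actual PubCreds specification.

class PubCredLedger:
    """Tracks one researcher's reviewing credits over a career."""

    REVIEW_CREDIT = 1      # credits earned per completed review (assumed)
    SUBMISSION_COST = 3    # credits spent per submission (assumed)

    def __init__(self, balance=0):
        self.balance = balance

    def complete_review(self):
        self.balance += self.REVIEW_CREDIT

    def can_submit(self):
        return self.balance >= self.SUBMISSION_COST

    def submit_paper(self):
        if not self.can_submit():
            raise ValueError("Insufficient PubCreds: review more before submitting.")
        self.balance -= self.SUBMISSION_COST


# A prolific 'leader in the field' stays in balance simply by reviewing
# in proportion to what they submit--nothing anti-meritocratic here.
leader = PubCredLedger()
for _ in range(30):          # 30 reviews in some period...
    leader.complete_review()
for _ in range(10):          # ...cover 10 submissions at 3 credits each
    leader.submit_paper()
print(leader.balance)        # 0: exactly the golden-rule ratio
```

Note that nothing in such a scheme caps how much anyone publishes; it only ties submissions to a proportional amount of reviewing.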
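And to make the fragility point concrete, here's the back-of-envelope calculation I had in mind. The 79%/21% split comes straight from the survey; the hypothetical cutback percentages are mine.

```python
# Back-of-envelope arithmetic from the survey figures quoted above:
# 'active' reviewers (44% of all reviewers) supply 79% of all reviews.
# If they decline some fraction of their current load, how much more
# would everyone else have to review to keep total capacity constant?

active_share = 0.79   # share of all reviews done by active reviewers (survey)
other_share = 1 - active_share

for cut in (0.05, 0.10, 0.20):   # hypothetical cutbacks by active reviewers
    shortfall = active_share * cut
    required_increase = shortfall / other_share
    print(f"{cut:.0%} cut by active reviewers -> "
          f"{required_increase:.0%} more reviewing from everyone else")

# 5% cut  -> ~19% more from everyone else
# 10% cut -> ~38% more
# 20% cut -> ~75% more
```

Because the remaining reviewers supply so little of the total, even a modest retreat by the active minority demands a large compensating effort from everyone else.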
Overall, I think these results paint a picture of a system that's functioning, but that is vulnerable. It's a system that relies heavily on a small fraction of people (and for good reasons, probably always will). It's a system in which reviewers describe themselves as operating at near- or above-maximum capacity. And it's a system that relies on people to be selfless in the face of strong incentives not to be (and as far as I know, the existence of those incentives hasn't been questioned, even by those who believe the current system is in great shape). Those features make the system vulnerable. And the system is indeed widely (though by no means universally) perceived to have serious problems. Brushing off such widespread concerns as whining, or 'anecdotal', or unimportant symptoms of the fact that, hey, academics are busy, seems to me to be a fairly complacent stance. Far better to actually dig deeper, find out what the core problems are and how fast they're getting worse, and in the meantime start thinking ahead about possible solutions. That's what Owen and I are trying to do.