Transformative idea for peer review: reviewing & grading the reviewers

Do you feel frustrated with the current peer review system in science?

I have an idea that might help, and it involves a revolutionary notion: reviewer accountability.

In other words, authors and grant writers would, in essence, review their reviewers.

I’ve made this easier for you with templates that you can quickly fill in and submit, either to the journal editor regarding a paper review or to the funding agency regarding a grant review.

Have you ever wondered what the heck a reviewer was thinking when you got back a review of a paper or grant, but felt helpless to do anything other than perhaps complain to a journal editor or funding agency official? Did you hesitate because you didn’t want to seem like a “troublemaker”?

I don’t mean the usual angst that might accompany a negative outcome. Rather, I am speaking to reviews that are just downright incompetent, unscientific, or vindictive. These seem more and more common these days.

Whether it was a review of a grant or a paper, I think most of us have found ourselves in a position where we just felt the reviewer was lousy. Even a few days later, after calming down, we still believed the reviewer simply did a bad job.

What can we do?

Up until now, not much other than complain.

Perhaps the reviewer did not read the grant or paper carefully, perhaps they were out to kill it, or perhaps for some unknown reason they just did a terrible job.

Shouldn’t such a reviewer be held accountable for that?

Right now, they aren’t.

To my knowledge, journals do not track the behavior and competence of their reviewers, but I believe they should. Editors of journals and grant review officials may informally think and worry about reviewers who are bad actors, but only in the rarest of circumstances do they ever do anything about it.

When paper authors or grant proposal submitters complain, they are viewed skeptically and often harshly. Plus, you might ask yourself: how do I even go about properly providing feedback on a reviewer? Who knows?

Therefore, I propose a new, simple system whereby journals and funding agencies keep score over time of how good a job their specific reviewers do.

They do this based on quantitative feedback from us!

Yes, you read that right.

Reviewers get reviewed. They are held accountable. They get scored.

In such a system, the reviewers review papers or grants as usual, but at the end of the process, say 3 days or 1 week later, the grant applicants or paper submitters in turn return scores rating how good a job they think the reviewers did.

Basically review becomes a two-way street.

Yeah, you might say, great idea, but won’t the recipients of reviews almost always harshly grade the reviewers, particularly if a grant is unfunded or paper rejected?

Overall, I don’t think so.

Oftentimes I myself get upset about a negative outcome from a review process, whether it is a grant or a paper, but in many cases after I calm down I start to see that in some reviews the reviewers made good points and actually put work into being good reviewers. In other words, they were competent. They read the grant or paper. They thought about it. I appreciate that.

In contrast, other reviews are clearly incompetent or have ulterior motives that are all too obvious.

The reason for the 3-7 day post-review waiting period before the grant or paper submitter returns a reviewer assessment score is to give them time to calm down and think it over. This new system I am proposing is not for the purpose of venting hard feelings, but rather for providing data to journals and grant funding agencies to help them determine their best and worst reviewers.

Over time, journals and funding agencies such as the NIH would start to see patterns of reviewer scores indicating, I would argue, who the good reviewers are and who the not-so-good reviewers are.

I propose that reviewers consistently receiving scores in, say, the bottom 10-20% would get the boot.

They would literally no longer be invited to review, say for a period of a year. In other words, reviewers would be held accountable for how good a job they do. Such scores might even come to be part of tenure and promotion packets, with faculty proudly (as the case might be) indicating their relatively positive scores as reviewers.
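
To make the mechanics concrete, here is a minimal sketch of how a journal or funding agency might tally such feedback and flag its lowest-scoring reviewers. This is purely illustrative: the Python code, the 1-10 scale, the reviewer IDs, and the 15% cutoff are hypothetical choices of mine, not part of any existing system.

```python
# Hypothetical sketch only: aggregate author/applicant feedback scores per
# reviewer and flag those whose averages fall in, say, the bottom 15%.
from statistics import mean

def flag_weak_reviewers(feedback, bottom_fraction=0.15, min_reviews=5):
    """feedback maps an anonymous reviewer ID to a list of 1-10 scores."""
    # Only consider reviewers with enough scores for the average to mean much.
    averages = {rid: mean(scores) for rid, scores in feedback.items()
                if len(scores) >= min_reviews}
    ranked = sorted(averages, key=averages.get)           # lowest average first
    n_flagged = max(1, int(len(ranked) * bottom_fraction)) if ranked else 0
    return ranked[:n_flagged]

# Made-up example: reviewer "R17" would be flagged for a one-year pause.
scores = {
    "R17": [2, 3, 1, 2, 3],
    "R42": [8, 7, 9, 8, 8],
    "R08": [6, 7, 5, 7, 6],
}
print(flag_weak_reviewers(scores))  # -> ['R17']
```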

Sure, you might say, but what about the fact that funding agencies and journals need all the reviewers they can get?

Perhaps, but eliminating the worst-performing 10-20% of reviewers would not make much of a dent in the overall pool and would dramatically improve the review system.

Why?

I think it would improve things because even though reviewers would remain anonymous to applicants and paper submitters, which many people (but not all) believe to be important, the reviewers would nonetheless be held accountable for the job they did, with consequences! As a result, I believe reviewers would start to take the review process more seriously and be far less likely to behave badly like the classic “reviewer #3” or, as I have called it, “Dr. No“.

I also believe this system would be great for younger scientists, including postdocs and students, because it could incorporate a way for them to earn reviewer scores even if they worked with their PI to review a paper. Over time such young scientists might grow to be trusted and productive reviewers for editors in their own right, even before they officially transition to independence. Thus, such a system might increase the number of vetted reviewers.

Sure, you say, but editors and grant agencies will resist adopting your system because it is more work for them, and generally people avoid change.

You are right that there may be some resistance, but I propose that grant submitters and paper submitters simply start sending feedback to funding agencies and journal editors whether they ask for it or not. Over time I think they’ll start using that data to evaluate reviewers. You can use the handy forms I made; filling one in shouldn’t take more than a few minutes. Then simply send it in post-review.

If you feel extra strongly about it, you might consider telling editors you won’t be submitting papers to their journal any more if they do not allow feedback on their reviewers.

The bottom line is that right now too many reviewers do a crappy job and are not held accountable, which is a fundamental weakness of our current system of science.

I can’t think of a more troubling weakness in science today in fact.

I firmly believe that my proposed reviewer scoring system would transform the process for the better.

7 thoughts on “Transformative idea for peer review: reviewing & grading the reviewers”

  1. I rather like your approach. Giving honest feedback has a tendency to improve performance. I have found this to be the case on many occasions. I think one of the problems is that writing a negative review is much harder than doing a positive one if you are to do it well. I find I write much more for a paper I am going to reject than one I will accept. I believe that the “bad” manuscript requires detailed reasons for rejection. This is a lot of hard work! Many reviewers can’t be bothered to take the time and effort. Also, a good reviewer should be pointing the direction for the authors to improve if he believes they are not up to snuff. Another point is that if it is really, really bad, you want to make sure it stays rejected. Sometimes they reappear like zombies from the grave.

  2. Another side point to this discussion is that I always wonder why reviews aren’t double blind, as in the reviewers do not know whose paper they’re reviewing, just like authors don’t know who their reviewers are. This would eliminate the potential for personal bias, so that papers from top-notch institutions or ‘big wig’ labs are reviewed in the same way as papers from lower-level institutions and less-known labs. This would involve a simple solution of blocking out the listed authors prior to distributing the paper for review. This could eliminate a potential opportunity for political/personal biases during the review process.

  3. Been following your blog for a while. It’s really nice to see a PI caring enough about the scientific enterprise to actually blog about it.

    What do you think about the move towards “transparent” reviews as adopted by EMBO Journal? http://www.nature.com/emboj/about/process.html#Editorial_Process

    I guess the weakness of this system is that you’ll never see the reviews of the articles that never made it to publication, although I do believe adopting such a transparent process in more journals may help reviewers be less “crappy” and force them to become more accountable, since the rest of the world may be seeing their reviews one day.

  4. Interesting proposal. Reforming the peer review system is laudable, but I don’t think your system would work.

    My main objection is pretty much as you put it:
    “Yeah, you might say, great idea, but won’t the recipients of reviews almost always harshly grade the reviewers, particularly if a grant is unfunded or paper rejected?”

    In response, you say that authors overall will respect well-aimed negative criticism. With respect, I think that’s a little naive. Good scientists with thick skins will. Good scientists with thin skins might not. Bad scientists definitely won’t. In fact, I’d wager that the very same reviewers who do a bad job will also respond darkly to any negative criticism of their own work. Bitterness borne of bitter experience.

    As a reviewer under your system, if I needed to ensure a high enough reviewer rating to get a job, I would avoid reviewing papers that I might have to reject. Reviewers would then tend to review only papers from authors/centres they know do good work, or based on abstracts that strongly suggest cutting-edge, solid work. Otherwise it would not be worth the risk of incurring an emotionally biased negative meta-review. Reviewing rates would plummet, leaving many journals fighting for survival.

    Even worse, your system could then lead to a positive selection bias in the peer review system: as a researcher seeking to advance my career, I could just review everything positively, ensuring that I get a positive meta-rating not only from the good scientists who do work but also from the bad/mean scientists who do crap work.

    I like the principle behind your idea, but I think it would be better to have *independent* reviewers of the reviewers. This would increase workload but could be applied to only a percentage of the reviews (e.g. 10% – high enough to encourage reviewers to do a fair and thorough job, but low enough to avoid an explosion of additional work). Yet, even having independent reviewers of reviewers is just another way of doing the editor’s job.

    1. You make some great points, Chris. The current review system is very troubled so some kind of change is in order. I like my proposed system because it gives editors and grant officers real data. Of course they’d have to THINK about the data and interpret it. For example, it’d quickly become apparent that papers that are rejected are associated with worse reviewer scores so they’d have to keep that in mind. Over time my hope, perhaps naive, is that trends would become evident and at the very least data would point to bad actors amongst the reviewers…no?

      1. The only way I could see it working is if the reviewer didn’t know s/he was being counter-reviewed (which is unrealistic of course). Otherwise the reviewer can game the system by always reviewing positively or avoiding reviewing papers that might be bad.

        Over time your system would catch bad reviewers, but it would also mix them indistinguishably with brave (good) reviewers, who dare to review sub-standard papers and grant apps by scientists with thin skins/bad attitudes.

        So for me, unless the meta-reviewing is done independently, I don’t think the data produced would be meaningful, and it would deter scientists (especially the all-important early-career researchers, who we know make the best reviewers) from contributing to the reviewing process.

        Sorry to sound so negative! I really like the philosophy of increasing accountability of reviewers. I just don’t think that having authors/applicants reviewing the reviewers is the best way to go about it.

        1. So reviewers would be too afraid of being scored negatively by those being reviewed to say anything harsh? Maybe some, but I’d hope some would have the backbone to do a good review job even for a bad paper.

          Perhaps I am a crazy optimist, but I still think overall it’d work. If editors only take action on the bottom 5-10% of scored reviewers that might give reviewers more generally some breathing room to still be honest about a weak paper?

          Reviewer assessment data on accepted papers could also be quite useful and fascinating.

          Anyhow, there is nothing to stop authors, for example, from using the form templates I included (or something of their own crafting) and submitting reviews of reviewers starting right now. Of course editors may not pay attention, but I suspect they would, because they’d be curious and interested in the data.
