Editors’ Forum Supplemental: Manuscript Hand-offs

In the next few of these posts, I aim to go solo and reflect on some of the questions that various communities posed on Facebook but that we didn't get around to answering in the journal editors' forum.

[Image: screenshot of question 10, posed by Brian on Facebook]

Thanks, Brian!

Suggestion 1 is one I have seen made before. It may even be policy at some journals, though I can’t recall which ones. Taking another journal’s reviews as-is, however, requires some coordination. The accepting journal doesn’t want papers with fatal flaws. It doesn’t want papers that had to be transformed in their theory and analysis so much that they’re different papers, because now the old reviews may not apply.

What it wants is papers where the initial reviews found the method and results to be basically sound or at least fixable; but rejection came mainly because they fell short of the originating journal’s standards for innovation, theoretical development, depth of evidence, or amount of evidence. And there really has to be a step difference in those standards from one journal to another. Probably this is best handled by the lower-impact journal editor-in-chief approaching the higher-impact one and setting up the hand-off agreement. Yes, there is a little awkward social hierarchy reinforcement here, but it’s all for the best. (Hey, in the grand scheme of things, all E-i-Cs are cool kids – or at least, alpha nerds.)

There would also be some finesse in the hand-off. Reviews are the intellectual property of their authors, so consent to pass them on would have to be given, most feasibly at the point when the review is solicited. The authors of the manuscript would have to agree, too, but I don't see anyone passing up the chance to shave six or more months off the publication process. Editors at the originating journal would have to apply some discretion to make sure that only papers with a reasonable chance are handed off. None of this, though, is unworkable.

(Brian’s second suggestion, “flipped” process, is creative. But imagine (for example) JESP, JRP, PSPB, and SPPS all getting in the pool together. I would have to not only look at the 600+ manuscripts we receive yearly, but the X number of relevant experimental social manuscripts that PSPB and SPPS receive, potentially doubling or tripling load. A lot less fun than an actual pool party with those guys.)

At JESP, the most obvious journal to solicit hand-offs from would be Journal of Personality and Social Psychology – at least the first two sections, which are most likely to have experimental social content. I think we can all agree that for a long time now, JPSP has expected more in terms of number of studies, theoretical innovation, and empirical development than any other social psychology journal. One more thing on my to-do list, then, is to approach Eliot Smith and Kerry Kawakami, the relevant section editors, with that proposal.

I’m also interested in hearing from other journals that think they could accept hand-offs from JESP. In case it wasn’t clear, I do think that every journal in the hierarchy plays its part. Work that makes smaller contributions deserves to be seen, not least for showing converging evidence on an established effect. Since 2011 that has only gotten more important. And the sheer quantity of research, the pressure in the pipeline, means that even lower-impact journals can pick and choose and enforce standards effectively.

Finally, many people don’t see the obvious drawback of the hand-off innovation as being a drawback at all. Under hand-offs, on average there will be less reviewing, fewer different opinions bearing on any given paper (the same goes for my editorial decision to avoid re-reviewing revisions, which has also been pretty popular as far as I can tell). If more reviewing and editing is not seen as a good thing, this tells me two things: the long and onerous process still doesn’t seem to be delivering consistent quality control, and adding more voices to the review process is seen to add noise rather than precision.

I think I know what the reason is, too. A review consists of a number of facts embedded in a set of opinions. Or at least, the more facts a review contains, the better it is. The same facts can elicit many different opinions. For some, a given limitation is but a foible, while for others, that same limitation is a fatal flaw.

The more the editorial process consists of just adding up opinions – or worse yet, a disjunctive, “Anna Karenina” rule in which one negative opinion is enough to sink the ship – the more it is seen as fundamentally capricious and chaotic, so that extending it only increases the pain of this arbitrary gantlet.

But if the review process consists of adding up facts into a big picture and evaluating those facts, regardless of opinion, then more facts are better, and the only question is when the saturation point of fact-finding is reached. I surely have more to say on this matter, but I hope it’s obvious that in my own editorial work and in guiding my associate editors, I encourage the higher-effort, fact-based approach wherever possible.
