Editors’ Forum Supplemental: Improving Journal Performance

Here’s my individual answer to another question from the SPSP forum we never got around to answering.

[Question 11 from the forum]

This refers to Tetlock’s work on identifying “superpredictors” and, more generally, on improving performance within geopolitical prediction markets. In those studies, the target outcome is clear and binary: Will the Republicans or the Democrats control the Senate after the next election? Will there be at least 10 deaths in warfare in the South China Sea in the next year? Here, Brent suggests that editorial decisions can be treated like predictions of a paper’s future citation count, which in turn feeds into most metrics that look at a journal’s impact or importance.

Indeed, prediction markets have been used as academic quality judgments in a number of areas: for example, the ponderous research quality exercises that we in the UK are subject to, or the Reproducibility Project: Psychology (I was one of the predictors in that one, though apparently not a superpredictor, because I only won 60 of the 100 available bucks). But the more relevant aspect of Tetlock’s research is the identification of what makes a superpredictor super. In a 2015 Perspectives article, the group lists a number of factors identified through research. Some of them are obvious, at least in hindsight, like high cognitive ability and motivation. Others seem quite specific to the task of predicting geopolitical events, like unbiased counterfactual thinking.

There’s a reason, though, to be skeptical of maximizing the citation count of articles in a journal. [Edit: example not valid any more, thanks Mickey Inzlicht for pointing this out on Facebook!] If I had to guess, subjective journal prestige would probably be best predicted by a function that positively weights citation count and negatively weights topic generality. That is, more general outlets like Psych Science have a larger pool of potential citers, independent of their prestige within a field.
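To make that guess concrete, here is a minimal, purely hypothetical sketch of such a function. The variable names, weights, and example numbers are invented for illustration; they are not estimates from any real data on journals.

```python
# Toy sketch of the guess above: subjective journal prestige modeled as a linear
# combination that weights citation count positively and topic generality negatively.
# All weights and inputs are hypothetical illustrations, not fitted estimates.

def predicted_prestige(mean_citations: float, topic_generality: float,
                       w_citations: float = 1.0, w_generality: float = 0.5) -> float:
    """Return a toy prestige score: rises with citations, falls with outlet generality."""
    return w_citations * mean_citations - w_generality * topic_generality

# A very general outlet with high citations and a specialist outlet with fewer
# citations can come out similarly once generality is penalized.
print(predicted_prestige(mean_citations=10.0, topic_generality=8.0))  # general outlet
print(predicted_prestige(mean_citations=7.0, topic_generality=1.0))   # specialist outlet
```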

More fundamentally, trying to game citation metrics directly might be bad overall for scientific reporting. Admittedly, there is very little systematic research into what makes an article highly cited, especially among the kinds of articles that any one journal might publish (for example, I’d expect theory/review papers to have a higher count than original research papers). But in trying to second-guess what kinds of papers might drive up impact ratings, there is the danger of:

  • Overrating papers that strive for novelty in defining a paradigm, as opposed to doing important work to validate or extend a theory, including replication.
  • Overrating bold statements that are likely to be cited negatively (“What? They say that social cognition doesn’t exist? Ridiculous!”).
  • Even more cynically, trying to get authors to cite internally within a journal or institution to drive up metrics. From what I have seen in a few different contexts, moves like this tend to be made with embarrassment and met with resistance.
  • Ignoring other measures of relevance beyond academic citations, like media coverage (and how to tell quality from quantity here? That’s a whole other post I’ve got in me.)

So really, any attempt to systematically improve the editorial process would have to grapple with a very complicated success metric whose full outcome may not be clear for years or decades. Given this, I’d rather focus on standards, and trust that they will be rewarded in the metrics over the long term.

But one last thing: It’s hard to ignore that methods papers, if directly relevant to research, seem to have a distinct advantage in citations. For example, among the top 10 most-cited JESP articles, three have to do with methods, a rate far higher than the overall percentage of methods papers in the journal. In Nature’s list of the top 100 most-cited papers across all sciences, the six psychology/psychiatry articles that make the cut all have to do with methods – either statistics, or measurement development for commonly understood constructs such as handedness or depression. (Eagle-eyed readers will notice that a lot of the rest are methods development in biology.) So, although I had other reasons for calling for more researcher-ready methods papers in my JESP editorial, I have to say that such useful content in a journal isn’t so bad for the citation count, either.