Justify Your Alpha … For Its Audience

In this light, it seems pertinent to ask, what is the fundamental purpose of a scientific publication? Although it is often argued that journals are a “repository of the accumulated knowledge of a field” (APA Publication Manual, 2010, p. 9), equally important are their communicative functions. It seems to us that the former purpose emphasizes perfectionism (and especially perfectionistic concerns) whereas the latter purpose prioritizes sharing information so that others may build on a body of work and ideas (flawed as they may eventually turn out to be).

(Reis & Yee, 2016)

There’s been a pushback against the call for more rigorous standards of evidence in psychology research publishing. Preregistration, replication, statistical power requirements, and lower alpha levels for statistical significance have all been criticized as leading to less interesting and creative research (e.g., Baumeister, 2016; Fiedler, Kutzner, & Krueger, 2012; and many, many personal communications). After all, present-day standards completely exclude from the scientific record any individual study that does not reach significance. So, wouldn’t stricter standards expand the dark reach of the file drawer even further? I want to show a different way out of this dilemma.

First things first. If creativity conflicts with rigor, it’s clear to me which side I chose 30 years ago. By deciding to go into social science rather than the humanities, I chose a world where creativity is constrained by evidence. If you want to maximize your creativity, you’re in the wrong business. In a simulation of publishing incentives, Campbell and Gustafson (2018; pdf) show that adopting a lower threshold for significance is most helpful in avoiding false positives when studying novel, unlikely, “interesting” effects. It is precisely because counterintuitive ideas are likely to be untrue that we need more rigor around them.
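To make that logic concrete, here is a minimal back-of-the-envelope sketch (mine, not Campbell and Gustafson’s actual simulation; the 80% power and the prior probabilities are illustrative assumptions). It computes what share of significant results would be false for routine versus novel hypotheses, at the two alphas:

```python
# Sketch of the false-positive logic behind lowering alpha for "interesting"
# (low prior probability) hypotheses. Illustrative assumptions throughout;
# not Campbell and Gustafson's code.

def false_discovery_rate(prior, power, alpha):
    """P(effect is false | significant result), where `prior` is the
    share of tested hypotheses that are actually true."""
    true_pos = prior * power          # true effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects crossing the threshold
    return false_pos / (true_pos + false_pos)

for prior in (0.5, 0.1):              # routine vs. novel/counterintuitive ideas
    for alpha in (0.05, 0.005):
        fdr = false_discovery_rate(prior, power=0.8, alpha=alpha)
        print(f"prior = {prior:.2f}, alpha = {alpha:.3f} -> "
              f"{fdr:.1%} of significant results are false")
```

With these assumed numbers, dropping alpha from .05 to .005 cuts the false share of significant findings from about 36% to about 5% when only one in ten tested hypotheses is true, a much bigger improvement than for safe, high-prior hypotheses.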

[Image: a bunny, captioned “Maximum creativity”]

One call for increased rigor has involved lowering significance thresholds, specifically to alpha = .005 (old school, for the hipsters: Greenwald et al., 1996 (pdf); new school: Benjamin et al., 2017). The best-known response, “Justify your alpha” (Lakens et al., 2017), calls instead for a flexible and openly justified standard of significance. It is not crystal clear what criteria should govern this flexibility, but we can take the “Justify” authors’ three downsides of adopting an alpha of .005 as potential reasons for using a higher alpha. There are problems with each of them.

Risk of fewer replication studies: “High-powered original research means fewer resources for replication.” Implicitly, this promotes a higher alpha for lower-powered exploratory studies, using the savings to fund high-powered replications that can afford a lower alpha (Sakaluk, 2016; pdf). Problem is, this model has been compared in simulations to just running well-powered research in the first place, and found lacking — ironically, by the JYA lead author. If evidence doesn’t care about the order in which you run studies, it makes more sense to run a medium-sized initial study and a medium-sized replication, than to run a small initial study and a large replication, because large numbers have statistically diminishing returns. The small study may be economical, but it increases the risk of running a large follow-up for nothing.
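Those diminishing returns are easy to check with any power calculator. A quick sketch using statsmodels (the effect size of d = 0.4 and the sample sizes are illustrative assumptions, not figures from the simulations cited above):

```python
# Power of a two-sample t-test at a fixed effect size, showing the
# statistically diminishing returns of ever-larger samples.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (25, 50, 100, 200, 400):
    power = analysis.power(effect_size=0.4, nobs1=n_per_group,
                           ratio=1.0, alpha=0.05)
    print(f"n = {n_per_group:>3} per group -> power = {power:.2f}")

# Doubling n from 25 to 50 buys far more power than doubling it from
# 200 to 400: hence two medium studies beat one small plus one large.
```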

Risk of reduced generalizability and breadth: “Requiring that published research meet a higher threshold will push the field even more to convenience samples and convenient methods.” Problem is, statistical evidence does not care whether your sample is Mechanical Turk or mechanics in Turkey. We need to find another way to make sure people get credit for doing difficult research. There’s a transactional fallacy here, the same one you may have heard from students: “I paid so much tuition and worked so hard on this, I deserve a top grade!” Like student grading, research is a truth-finding enterprise, not one where you pay your money and get the best results. To be clear (spoiler ahoy!), I would rather accommodate difficult research by questioning what we consider to be a publishable result than by changing standards of evidence for a positive proposition.

Risk of exaggerating the focus on single p-values. “The emphasis should instead be on meta-analytic aggregations.” Problem is, this issue is wholly independent from what alpha you choose. It’s possible to take p = .005 as a threshold for meta-analytic aggregation of similar studies in a paper or even a literature (Giner-Sorolla, 2016; pdf), just as p = .05 is now the threshold for such analyses. And even with alpha = .05, we often see the statistical stupidity of requiring all individual results in a multi-study paper to meet the threshold in order to get published.
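For concreteness, here is what a threshold on the aggregate rather than on single studies could look like: a sketch using scipy’s p-value combination (Stouffer’s method; the four p-values are made up for illustration). No single study comes anywhere near .005, yet the set clears it comfortably:

```python
# Combining p-values across the studies in a multi-study paper, rather
# than requiring each study to clear the threshold on its own.
from scipy.stats import combine_pvalues

p_values = [0.04, 0.12, 0.03, 0.09]   # no single study is "impressive"
stat, p_combined = combine_pvalues(p_values, method='stouffer')
print(f"combined p = {p_combined:.4f}")  # about .001, well under .005
```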

See, I do think that alpha should be flexible: not according to subjective standards based on traits of your research, but based on traits of your research audience.

Reis and Yee, in the quote at the top, are absolutely right that publishing is as much a communicative institution as a truth-establishing one. Currently, we have a context-free, one-size-fits-all publishing model in which peer-reviewed papers are translated directly into press releases, news stories, and textbook examples. But I believe that the need to establish a hypothesis as “true” is most important when communicating with non-scientific audiences: lay readers, undergraduate students, policymakers reading our reports. Specifically, I propose that in an ideal world there should be:

  • no alpha threshold for communications read by specialists in the field
  • a .05 threshold for reporting positive results aimed at research psychologists across specialties
  • a .005 threshold (across multiple studies and papers) for positive results that are characterized as “true” to general audiences

(Yes, firmly negative results should have thresholds too — else how would we declare a treatment to be quackery, or conclude that there’s essentially no difference between two groups on an outcome? But on those thresholds, there’s much less consensus than on the well-developed significance testing for positive results. Another time.)

No threshold for specialists. If you’re a researcher, think about a topic that’s right on the money for what you do, a finding that could crucially modify or validate some of your own findings. If the study was well conducted, you would probably want to read it no matter what it found. Even if the result is null, you’d want to know that, in order to avoid going down that path in your own research. This realization is one of the motivations behind the increasing acceptance of Registered Reports (pdf), in which the methods are peer-reviewed and a good study can be published regardless of what it actually finds.

.05 threshold for non-specialist scientific audiences. Around each researcher’s specialty lies an area of findings that could potentially inspire and inform their work. Social psychologists might benefit from knowing how cognitive psychologists approach categorization, for example. There may not be enough interest to follow null findings in these areas, but there might be some interest in running with tentative positive results that can enrich our own efforts. Ideally, researchers would be well-trained enough to know that p = .05 is fairly weak evidence, and would treat such findings as speculative even as they plan further tests of them. Having access to this layer of findings would maintain the “creative” feel of the research enterprise as a whole, and also give wider scientific recognition to research that has to be low-powered.

.005 threshold for general audiences. The real need for caution comes when we communicate with lay people who want to know how the mind works, including our own undergraduate students. Quite simply, the lower the p-value in the original finding, the more likely it is to replicate. If a finding is breathlessly reported, and then nullified or reversed by further work, this undermines confidence in the whole of science. Trying to communicate the shakiness of an intriguing finding, frankly, just doesn’t work. As a story is repeated through the media, the layers of nuance fall away and we are left with bold, attention-grabbing pronouncements. Researchers and university press offices are as much to blame for this as reporters are. Ultimately, we also have to account for the fact that most people think about things they’re not directly concerned with in terms of “true”/”not true” rather than shades of probability. (I mean, as a non-paleontologist I have picked up the idea that “oh, so T. rex has feathers now,” when it’s a more complicated story.)
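The link between the original p-value and replicability can be shown with a rough simulation (all parameters — the share of true effects, the effect size, the sample sizes — are assumptions for illustration, not estimates for any real literature). Simulate a mix of true and null effects, “publish” only the significant originals, and tabulate replication rates by the original p-value:

```python
# Rough simulation: the lower the original p-value, the more often an
# exact replication also reaches significance. All parameters are
# illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_PER_GROUP, D, PRIOR_TRUE = 50, 0.4, 0.3

def run_study(effect):
    """One two-group study; returns the two-sided p-value."""
    a = rng.normal(0.0, 1.0, N_PER_GROUP)
    b = rng.normal(effect, 1.0, N_PER_GROUP)
    return ttest_ind(b, a).pvalue

bins = {'p < .005': [], '.005 <= p < .05': []}
for _ in range(5000):
    effect = D if rng.random() < PRIOR_TRUE else 0.0
    p_original = run_study(effect)
    if p_original >= 0.05:
        continue                  # never "published", so never replicated
    key = 'p < .005' if p_original < 0.005 else '.005 <= p < .05'
    bins[key].append(run_study(effect) < 0.05)  # replication significant?

for key, successes in bins.items():
    print(f"original {key}: replication rate = {np.mean(successes):.0%}")
```

Under these assumptions, originals below .005 replicate at a much higher rate than those squeaking in between .005 and .05, which are disproportionately false positives.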

[Image: R. J. Palmer’s illustration of a feathered Tyrannosaurus rex, captioned “We’ve seen the bunny, now for the chickie.”]

I make these recommendations fully aware that our current communication system will have a hard time carrying them out. Let’s take the first distinction, between no threshold and alpha = .05. To be a “specialist” journal, judging from the usual wording of editors’ rejection letters, is to sit in a lower league to which unworthy findings are relegated. Yet currently I don’t see specialist journals as any more or less likely to publish null findings than general-purpose psychology journals. If anything, the difference in standards is more about what is considered novel, interesting, and methodologically sound. Ultimately, the difference between my first two categories may be driven by researchers’ attention more than by journal standards. That is, reports with null findings will simply be more interesting to specialists than to people working in a different specialty.

On to the public-facing threshold. Here I see hope that textbook authors and classroom teachers are starting to adopt a stricter evidence standard for what they teach, in the face of concerns about replicability. But it’s more difficult to stop the hype engine that serves the short-term interests of researchers and universities. Even if somehow we put a brake on public reporting of results until they reach a more stringent threshold, nothing will stop reporters reading our journals or attending our conferences. We can only hope, and insist, that journalists who go to those lengths to be informed also know the difference between results that the field is still working on, and results that are considered solid enough to go public. I hope that at least, when we psychologists talk research among ourselves, we can keep this distinction in mind, so that we can have both interesting communication and rigorous evaluation.


One thought on “Justify Your Alpha … For Its Audience”

  1. I hope it’s okay for me to present an idea/format that aims to incorporate many of the issues you mention in this blog post and tries to find a balance between them. For instance:

    • finding a balance between “creativity” and “rigor”
    • finding a balance between “original” work and “replications”
    • incorporating both “significant” and “non-significant” results in publication
    • combining multiple p-values meta-analytically rather than focusing on single p-values
    • producing multi-study papers that contain both “conceptual” and “direct” replications

    http://andrewgelman.com/2017/12/17/stranger-than-fiction/#comment-628652

    Should any of it make sense, and should you think it might be useful to think about, discuss, or write about further, I hope you will do so!
