This past week I have been putting together notes taken by a couple of very helpful audience members (Michael Wohl and Jessie Sun, thank you!) from our Editors’ Panel on stats and reporting controversies last weekend at the SPSP conference. The panelists also helped to reconstruct the dialogue. It’s not a 100% accurate transcript, but it gets the point across.
Replies follow the original questions, taken from their social media context (SPSP forum and Facebook). Post-hoc comments from me and the other editors are in italics.
Roger Giner-Sorolla (RGS), Richard E. Lucas (REL), Simine Vazire (SV), Duane T. Wegener (DTW) (respectively Editors-in-Chief of JESP, JRP, SPPS, PSPB)
SV – It is the way people think about their results that matters. The way bootstrapping for mediation is often currently used in S/P psychology shows that people do use dichotomous thinking. We need to think more flexibly. [I think I mentioned this in response to another question, but CIs also help a lot when interpreting null results – are they really conclusive that there is a close-to-zero effect, or does the CI also include practically meaningful effects? -SV]
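As a post-hoc illustration of that bracketed point (mine, not part of the panel), here is a rough sketch of how a confidence interval can show whether a null result is conclusive. The effect size, the sample sizes, and the “smallest effect of interest” of d = 0.20 are all assumptions chosen for the example, and the standard error is the usual large-sample approximation for Cohen’s d.

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d from two independent groups,
    using the standard large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# A nonsignificant d = 0.10 with 50 participants per group:
lo, hi = cohens_d_ci(d=0.10, n1=50, n2=50)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # roughly [-0.29, 0.49]
# The interval still contains d = 0.20 and larger, so these data do not
# establish a close-to-zero effect.
```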
DTW – Using p values appropriately in a continuous and more nuanced manner is helpful.
REL: One goal of encouraging CIs is changing attitudes and how we think about and write up our results. Even if CIs and p values are related, CIs encourage an appreciation of uncertainty.
RGS – Agree with the above, but CIs can be hard to interpret in basic research. CIs become more meaningful when the number has more meaning (for example, effects on GPA, earnings in money terms, weight loss, etc.).
REL – We need to keep the spirit rather than the letter of these changes in mind and build in flexibility.
SV – The N issue needs to be noted and discussion of the results more tempered. I don’t know an editor who has imposed new standards without providing exceptions. For example, considering how difficult it would be to collect a large sample from a specific population, we have to be flexible. But if people studying hard-to-collect populations can still make the effort to collect adequate numbers, then we should expect more from those with easier-to-collect data.
RGS – N is not the only way to increase power; increasing the reliability or precision of methods are other ways. As for the second part of the question, there is a common misperception that pre-registration ties your hands. It just allows the researcher to draw a distinction between confirmatory and exploratory analyses. Exploratory analyses (those not in the pre-registration) can be reported, and those evaluating the manuscript should not just dismiss them, but look at the strength of evidence in the context of how many tests could have been tried.
DTW – There will always be nuanced boundaries in what is exploratory vs. confirmatory. Often, exploratory ideas can become confirmatory tests in subsequent studies.
REL – Typically those studies are very well documented. You can point people to a website in the manuscript.
SV – The authors should report the other variables that they did consider. Disclose them in a footnote or a short description in the main text. Authors could also show how the results look across the set of specifications they did consider, as well as other sets they could have considered. Show representative results, not the most beautiful results you could have gotten from the data set. A good practice might be to present your most damning result as well as your most beautiful result. [For more info on specification curves, see this paper by Simonsohn et al., under review: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2694998 -SV]
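To make the idea of reporting results across a set of specifications concrete, here is a minimal sketch of my own; the simulated data, variable names, and models are hypothetical, and this is only a toy version of the specification-curve approach described in the linked paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a toy data set with one focal predictor and two possible covariates.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "x": rng.normal(size=n),
    "age": rng.integers(18, 65, size=n),
    "gender": rng.integers(0, 2, size=n),
})
df["y"] = 0.2 * df["x"] + rng.normal(size=n)

# Each formula is one specification the authors did (or could have) run.
specifications = [
    "y ~ x",
    "y ~ x + age",
    "y ~ x + gender",
    "y ~ x + age + gender",
]
for spec in specifications:
    fit = smf.ols(spec, data=df).fit()
    print(f"{spec:<22} b(x) = {fit.params['x']:.3f}, p = {fit.pvalues['x']:.3f}")
# The point is to report the whole range of estimates, not just the prettiest one.
```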
RGS – There is a difference between the letter and the spirit of the rule. In this case you can make your own call about which variables are relevant to the hypotheses and should be disclosed in the article, but as mentioned, being able to verify these choices against a posting of the larger data specification might be important for people evaluating the work.
SV – At the end of 2016, we will start asking people after publication whether they will post their data, and give them a badge if they do. We don’t know much about the effects going forward (e.g., on submission rates), but I don’t think it will negatively impact submissions. We will just give a badge to those who do share. Whether you’re willing to share your data won’t affect decisions at SPPS.
DTW – We do not have badges, but we ask that people abide by guidelines. Also, how people deal with the data that’s been shared with them can have a chilling effect or an engaging effect.
REL – The more people do this, the more evidence will accumulate. It is good to share, but we understand there are times when this is not possible. We should give people an opportunity to explain why they are not sharing. One of the nice things about trying a lot of different things is that it’ll start to show us what works.
RGS – We don’t have badges, but it is on the to-do list. We have checkboxes confirming that people will keep raw data for 5 years and that APA guidelines have been and will be followed. Also, as a field, if we want open sharing of data, we need to communicate this to IRBs and make sure the consent forms are compliant with open data. Otherwise there’s no way you can legitimately put up data.
SV – We need to have open discourse about change. If there’s a clear sense of what members would like, that’s a big part of decision-making when policies are set. Express your preferences, and vote with your submissions.
Context note: The bias analysis and rankings refer to the R-index, which compares studies’ post-hoc power with the proportion of significant results; low rankings mean significant results are reported more often than the studies’ estimated power would lead us to expect.
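For readers who want the arithmetic, here is a rough sketch of the R-index logic as I understand it. Estimating observed power from reported z-values and summarizing with the median are assumptions on my part; this is an illustration, not a definitive implementation of the published index.

```python
from statistics import median
from scipy.stats import norm

def observed_power(z, alpha=0.05):
    """Post-hoc power of a two-sided z-test, treating the observed z as the true effect."""
    crit = norm.ppf(1 - alpha / 2)
    return norm.sf(crit - abs(z)) + norm.cdf(-crit - abs(z))

def r_index(z_values, alpha=0.05):
    powers = [observed_power(z, alpha) for z in z_values]
    success_rate = sum(abs(z) > norm.ppf(1 - alpha / 2) for z in z_values) / len(z_values)
    inflation = success_rate - median(powers)  # excess of significant results over expected power
    return median(powers) - inflation          # low values suggest selective reporting

# Four studies that are all just barely significant yield a very low R-index:
print(round(r_index([2.0, 2.1, 2.2, 1.98]), 2))
```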
RGS – I would like to see more testing and generation of methods (like Erika Salomon’s poster at SPSP) to determine whether bias analyses are accurate at small numbers of studies and which ones work best. The file drawer issue is an important one. Currently the above usage of the R-index is based on all statistical tests and not just focal ones, so one caveat is that it may be picking up reporting styles not related to selective reporting. Journals with more personality research also seem to be doing better in the above ranking subset. That said, and not to be too defensive, my new guidelines at JESP are aiming at reducing selective reporting by encouraging a “big picture” approach to p values.
REL – We likely would not do a formal analysis for this. I would tell the authors that they have poorly powered studies [RGS: this is based on the assumption that most bias tests, directly or indirectly, reveal an unusual preponderance of significant p-values that are high and close to .05 REL: Yes]. We should not have to rely on assumptions about these things. We can deal with this without casting aspersions. [REL: I do wish I had said that much of the debate about these methods seems to be about just how certain we can be when the techniques are used. However, I think there is less debate that much of the logic behind them is sound, and that in the situation described in the question, this is evidence that we should be cautious in our interpretation. And I think an editor can use that to ask for more evidence, even if in post-publication peer review we might be cautious about the accusations we make about the author.]
DTW – We also have to look at the claims being made in the paper. Are you arguing that you have a large effect? If the effect size estimate is the main claim, you want to make sure it’s as accurate as possible. Are you making a directional hypothesis? We should expect and value variability across studies.
SV – If I see a series of underpowered studies, I will ask for a high-powered study or one with pre-registration. The vast majority of the desk rejections at SPPS are due to underpowered studies.
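To put a number on “underpowered,” here is a quick sketch (mine, not a statement of SPPS policy) of the per-group sample size needed for 80% power in a simple two-group comparison; the assumed effect size of d = 0.30 is just for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for an independent-samples t-test,
# two-sided alpha = .05, power = .80, assumed d = 0.30.
n_per_group = TTestIndPower().solve_power(effect_size=0.30, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 175 per group
```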
RGS – It looks bad for our field if we are not reporting studies that don’t work. It’s an ethical imperative that has been supported by the APA [RGS: expect a blog post on this little-known fact, but I also reference it in my editorial]. I would say that it’s not a robust finding when all ps are just below .05, and we might call for a new pre-registered study. Why does “marginally significant” only go one way (e.g., when p = .06, but not when p = .04)?
DTW – We expect variability across studies. We need to be comfortable with that and interpret that variability. We’ve been trained to analyse single studies, but not a set of studies.
DTW – It depends on what you attribute the reproducibility (or lack thereof) to.
SV – I think this should be an important goal for our journals – most of our studies should replicate. The goal should not be 100%, but if we say we don’t care about our findings being replicable, we’re in trouble. Of course we need to be sensitive to whether the replication differed in important ways from the original, but we should try to specify the necessary conditions a priori. We allow authors to provide supplementary material. We should be able to publish things in such a way that the study could be replicated. Authors should be noting if their manipulation doesn’t work in a given context and why, so that if there is a methodological feature that is key for replication, it should be stated a priori.
RGS – Reporting more aspects of the study is important. Doing so should result in improved reproducibility. Pre-testing and calibration are important in original research as well as in replication research.
REL – What’s important is getting more data on systematic attempts. [REL: I also think that things may change slowly, and we shouldn’t be discouraged if a 2015 report was not much different from a 2013 report.]
REL – Things have been messy for a while at JRP [due to the nature of the research we deal with]. I haven’t noticed any changes in reaction, but I think we are perhaps more open to messy data and will continue to be.
DTW – Perhaps a small increase. We need to be comfortable with messiness. Although each study may not be clear individually, alongside other studies the results may be clearer.
SV – We need to appreciate variability and messy results more. But sometimes messiness comes from studies that are not well done (e.g., low power). In that case, I wouldn’t be convinced by a meta-analysis of the studies. Sometimes it’s easy to collect more data to get a conclusive result in either direction. This is where thinking about CIs helps a lot. The question is: Do we know more now than we did before we read the paper? If the answer is no, and it would be relatively easy to collect more data to get more certainty, then it’s still reasonable for editors to want a clear, overall conclusion.
RGS – You can do an internal meta-analysis or a Fisher’s test to aggregate ps when summarizing “messy” data. So far, I haven’t seen the amount of “mess” I’m looking for in the 45 or so manuscripts this year, but, with time, hopefully. [RGS: I should also add that among the articles I have seen so far, people seem to be addressing the “mess” issue the other way, with big samples, and yes, most of them are crowdsourced. See Patty Linville’s question below.]
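For anyone unfamiliar with it, here is a minimal sketch of Fisher’s test for aggregating p-values across studies; the p-values below are hypothetical, and this is only an illustration of the method, not a recommendation about any particular manuscript.

```python
from math import log
from scipy.stats import chi2

# Fisher's method: -2 * sum(ln(p_i)) follows a chi-square distribution with
# 2k degrees of freedom when all k null hypotheses are true.
p_values = [0.04, 0.20, 0.08, 0.11]  # hypothetical study-level p-values
statistic = -2 * sum(log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))
print(f"chi-square({2 * len(p_values)}) = {statistic:.2f}, combined p = {combined_p:.3f}")
# scipy.stats.combine_pvalues(p_values, method="fisher") gives the same result.
```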
Q and A from the audience at SPSP
Alex Rothman: Let’s specify some hypotheses right now. What do you think the effects on scientific behaviour will be in five years’ time? We can treat the situation like a natural experiment (journals are doing different things). It would be great to have some evidence for what happens.
SV: The goal for the next reproducibility project would not be 100% reproducibility, but to set a standard for what we would like it to be. In the end, something like that has to be one of the main goals of science—to produce a body of knowledge that has some amount of certainty. It’s great to think of it as an empirical question to try to make predictions. Hopefully the reproducibility will increase over time with what we are doing.
DTW: It depends on what was causing problems in the first place. Right now we still don’t know for sure what that was.
SV: It’s true that these new policies are changing the rewards. But the acceptance rates are staying the same. There are always some people getting rewarded for some practices.
Chris Crandall had a comment that we found difficult to remember but he graciously provided a version of it for us here.
Many people in the “reform” movement are not using the phrases “context of confirmation” and “context of discovery,” or their statistical counterparts “exploratory” and “confirmatory” data analyses, in a correct fashion.
There is no hard distinction between the two in any philosophy of science that I know (that philosophers still endorse). What people mean when they say these words is this:
“The truth value of the data is dependent upon the state of mind of the researcher prior to conducting data analysis.”
That is, what you thought you were looking for, before you saw the data, determines what you can conclude from the data (which, of course, exists independently before you analyze anything).
I suggested that what people really care about is “how long were you digging into the data before you found the results you’re reporting?” This is really something quite different, and I proposed that we need a metric, and a way to report, how deep into the analysis we got before we developed a particular statistical model. This will not be easy, because a well-fitting statistical model is like a ring of keys–we always find it at the end of the hunt, because we stop the search at success.
Patricia Linville: Do you feel that we will be a field that only does research on MTurk to get our N up to meet the new standards? I would have more faith if we saw studies in a paper that got their samples from a few different places.
REL – JRP is not finding an increase in large student [or MTurk? RGS] samples. Those that just report correlations among [self-report measures] get desk-rejected. [We do have more studies that use internet samples, but this is inclusive; it is not just MTurk, but also studies where people use a variety of methods to recruit participants and have them participate online (including online panels).] It’s also important to track these side effects over time. At least at JRP, we haven’t had these negative side effects, but we should pay attention to and track them. We are tracking sample size over the years as well as the methods used over time.
DTW – I share the concern. There will be intended and unintended consequences. On MTurk we now have “professional” participants and there could be unintended consequences there.
SV – This is where flexible standards come into play—I think we should have affirmative action for multi-method, intensive methods.
In future blog posts I will try to address some of the questions from Facebook that didn’t get answered in the panel. The serious ones, anyway.