APA 7th, part I: Improvements in Data Transparency

Hi Ma! I’m in the APA 7th Edition publication manual.


Thanks to Rose Sokol-Chang, Anne Woodworth, and Emily Ayubi, I had a chance to give input on a number of issues while it was in production.

For one, I’m happy that in line with my suggestions, they sharpened up the conflict of interest definitions (section 1.20, p. 23). Financial interests, which can involve authors, editors, or reviewers, are distinguished from personal connections, which are of concern to editors and reviewers handling authors’ work. Differences of opinion are not per se conflicts, but advice is given on handling this issue objectively. Editors are encouraged to set policy on such issues as the time frame beyond which a past academic relationship is no longer a conflict.

Also, the new edition, with my input, clarifies the ethics of data sharing in previous APA policy (section 1.14, pp. 13-16). Let’s talk more about that.

For science to be self-correcting, scientists have to be able to check the data that published conclusions are based on. At the same time, we also want to limit the vexatious abuse of this process, and to give authors confidence that the data will only be used for checking the accuracy of their article. These principles guided me in advising how the policy could be improved, inspired by breakdowns in function of the previous policy and by experience with controversial cases as an editor. Here are some highlights that made it to the Manual.

  • Who counts as a “qualified professional?” (Accountability matters.)
  • What are reasonable uses of data to confirm the conclusions of the original paper? (Focus on verifying the conclusions, including the possibility of approaching them with alternative analyses.)
  • What are special considerations when dealing with proprietary data, and data based on proprietary methodologies? (Editors need to decide how they act when they get a paper that would not exist if the data had to be shared.)
  • How are the costs of data transmission to be assessed? (According to a reasonable local rate that covers the actual job.)
  • What are the consequences of failing to share data on the request of an editor? (Potentially, retraction of, or expressions of concern about, the article.)

The policy also asks the requester more specifically to bear the costs of sharing data over and above any preparations needed to make the data usable internally within an institution. In other words, it should always be the researcher’s responsibility to keep their data in good shape in case they get hit by a bus or decide to enter a hermitage. You, or I, may be slack on this responsibility at any given moment, but that doesn’t change whose responsibility it is.

What would this mean in practice? Well, if I ask for a data set and you charge me $100 an hour to have someone change the variable names into something that matches up with the analyses you reported, that is just something you should have done in the first place. If I ask you to translate open-ended responses in your source files from Chinese into English, that’s a different story. It’s more justifiable that I should be stuck with the bill, or should just find someone to do the translation on my end.

Two more improvements I wasn’t involved in: encouragement for editors to adopt open data policies and data badges, and a section on sharing qualitative data. Overall, this expanded section gives guidance to make science published under APA rules more reliably transparent.

The other section of the Manual I was consulted on concerned the implications of APA’s long-standing ethical policy about full reporting. The uptake of that advice in the final manual is more complicated, and leads me into some thoughts about how standards can be enforced in publishing. That’s another blog post, coming soon!


APA Ethics vs. the File Drawer

These days, authors and editors often complain about a lack of clear, top-down guidance on the ethics of the file-drawer. For many years in psychology, it was considered OK to refrain from reporting studies in a line of research with nonsignificant key results. This may sound bad to your third-grader, to Aunt Millie, or to Representative Sanchez. But almost everyone did it.

The rationales have looked a lot like Bandura’s inventory of moral disengagement strategies (pdf): “this was just a pilot study” (euphemistic labeling), “there must be something wrong with the methods, unlike these studies that worked” (distortion of consequences — unless you can point to evidence the methods failed, independently of the results), “at least it’s not fabrication” (advantageous comparison), and of course, “we are doing everyone a favor, nobody wants to read boring nonsignificant results” (moral justification).

Bandura would probably classify “journals won’t accept anything with a nonsignificant result” as displacement of responsibility, too. But I see journals as just the right and responsible place to set standards for authors to follow. So, as Editor-in-Chief, I’ve let it be known that JESP is open to nonsignificant study results, either as part of sampling variation in a larger pattern of evidence, or telling a convincing null story thanks to solid methods.

That’s the positive task, but the negative task is harder. How do we judge how much a body of past research, or a manuscript submitted today, suffers from publication bias? What is the effect of publication bias on conclusions to be drawn from the literature? These are pragmatic questions. There’s also ethics: whether, going forward, we should treat selective publication based only on results as wrong.

Uli Schimmack likens selective publication and analysis to doping. But if so, we’re in the equivalent of the 50-year period in the middle of the 20th century when, slowly and piecemeal, various athletic authorities were taking their first steps to regulate performance-enhancing drugs. A British soccer player buzzed on Benzedrine in 1962 was not acting unethically by the regulations of his professional body. Imagine referees being left to decide at each match whether a player’s performance is “too good to be true” without clear regulations from the professional body. That is the position of the journal editor today.

Or is it? I haven’t seen much awareness that the American Psychological Association’s publication manuals, 5th (2003) and 6th (2010) editions, quietly put forward an ethical standard relevant to selective publication. Here’s the 6th edition, p. 12. The 5th edition’s language is very similar.


Note that this is an interpretation of a section in the Ethics Code that does not directly mention omission of results. You could search the Ethics Code without finding any mention of selective publication, which is probably why this section is little known. Here’s 5.01a below.


Also getting in the way of a clear message is the Publication Manual’s terse language. “Observations” could, I suppose, be narrowly interpreted to mean dropping participants ad hoc from a single study just to improve the outcome. If you interpret “observations” more broadly (and reasonably) to mean “studies,” there is still the question of what studies a given report should contain, in a lab where multiple lines of research are going on in parallel. There is room to hide failed studies, perhaps, in the gap between lines.

But I don’t think we should be trying to reverse-engineer a policy out of such a short description. See it for what it is: a statement of the spirit of the law, rather than the letter. Even if you don’t think you’re being “deceptive or fraudulent,” just trying to clarify the message out of kindness to your reader, the Publication Manual warns against the impulse “to present a more convincing story.” There can be good reasons for modifying and omitting evidence in order to present the truth faithfully. But these need to be considered independently of the study’s failure or success in supporting the hypothesis.

One last numbered paragraph. This is the relevant section of the Ethical Principles (not the Publication Manual) that authors have to sign off on when they submit a manuscript to an APA journal.


What would be the implications if the APA’s submission form instead used the exact language and interpretation of 5.01a from its own most recent Publication Manual? Explosive, I think. Using the APA’s own official language, it would lay down an ethical standard for research reporting far beyond any of the within-study reporting measures I know about in any journal of psychology. It would go beyond p-curving, R-indexing, and “robustness” talk after the fact, and say out loud that file-drawering studies only because they’ve failed to support a hypothesis is unethical. Working out reasonable ways to define that standard would then be an urgent next step for the APA and all journals that subscribe to its publication ethics.