Writing this paper substantially changed my thoughts on what research we should be doing as psychologists. Check out @annemscheel’s thread, but I like this paper so much that I wanted to add some of my own thoughts. https://twitter.com/annemscheel/status/1302996065025757185?s=20
Over the years, as I taught people how to improve their statistical inferences, it became clear that people are often not yet ready to test a hypothesis. In lecture 1.2 of my second MOOC I ask researchers: Do you really want to test a hypothesis?
We need to force people to ask themselves this, because otherwise, by default, they will come up with hypothesis-testing research. It’s what we are trained in from day 1, and we learn it vicariously by reading the literature.
People have been saying we should value ‘exploratory research’ just as much as ‘confirmatory research’. Cortex started ‘Exploratory Reports’ and what do you think? Everyone loves the idea, but almost no one actually submits good exploratory work. Why is that?
I think psychologists do not want to do ‘merely’ exploratory research, because we don’t know how to value exploratory research. We need to change it from ‘non-hypothesis-testing research’ into something positive: defining it not by what it isn’t, but by what it is.
This is what we try to do in our paper. For example, we argue for more *parameter-range exploration*. Dose-response curve studies in medicine are an example: pick a dimension, manipulate the intensity of the IV, and reliably measure the DV.
This type of research provides incredibly important information about what we are measuring and about the relations between constructs. By making its goal explicit, we can evaluate how well such studies are done. And hopefully, more people will do this important work.
The wider the range, the higher the accuracy, and the more important people think the dimension is, the more valuable the parameter-range exploration study. It’s no longer *merely exploration*. It’s a study with a clear goal that can be done better or worse.
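To make that concrete, here is a minimal simulation sketch. This is my own illustration, not from the paper: the “true” dose-response function, sample sizes, noise level, and ranges below are all assumptions chosen just to show why sampling a wider range of the same dimension is more informative about the shape of the relationship.

```python
# Minimal sketch of a parameter-range exploration study (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(2020)

def true_response(dose):
    # Hypothetical saturating dose-response curve (an assumption for illustration).
    return 2.0 * dose / (1.0 + dose)

def run_study(doses, n_per_dose=30, noise_sd=0.5):
    # Measure the DV at each level of the manipulated IV and return the mean per level.
    means = []
    for d in doses:
        dv = true_response(d) + rng.normal(0, noise_sd, n_per_dose)
        means.append(dv.mean())
    return np.array(means)

# The same dimension, explored over a narrow versus a wide parameter range.
narrow = np.linspace(0.8, 1.2, 5)
wide = np.linspace(0.1, 5.0, 5)

print("narrow range means:", np.round(run_study(narrow), 2))
print("wide range means:  ", np.round(run_study(wide), 2))
# The wide range reveals the shape of the curve (rising, then flattening),
# which the narrow range cannot show: a wider, accurately measured range
# makes the exploration more valuable.
```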
The goal of this, and of other types of research (e.g., naturalistic observation, evaluating the a priori plausibility of a theory, feasibility and pilot studies), is to *strengthen the derivation chain*. This is a second important point of the paper.
We want our hypothesis tests to have consequences. Why do psychologists mainly publish positive results? Because showing a prediction is wrong has almost no consequences. The pushback from reality is not strongly connected to our theories.
There are too many weaknesses in the steps between the theory and the operationalized statistical test. A null result should have consequences for what we believe. Having a solid knowledge base connecting theories with tests makes tests consequential.
Writing this paper not only made me realize that I was often not ready to test a hypothesis; it also made me much more comfortable performing studies in which I do not test a hypothesis, and confident that these are good contributions to the literature.
For example, in this recent paper on whether reviewers’ decisions to sign their reviews are related to their recommendations, we do not report a single test. But I think the study, consisting purely of naturalistic observation, is super interesting: https://psyarxiv.com/4va6p/
We hope our paper on why hypothesis testers should spend less time testing hypotheses makes you feel comfortable performing different types of studies that build a solid knowledge base and help to strengthen the derivation chain! /end