thread on thread on thread, to add this: people sometimes say we shouldn't be so skeptical of policy recommendations based on non-definitive empirical studies, because hey, we have to do *something*, and better to act on limited data and theory than to just sit on our hands. https://twitter.com/BrentWRoberts/status/1249364124154441728
i think this stems partly from many people having an implicit 50-50 prior on scientific claims—i.e., if someone's thought hard about a problem, and proposed a hypothesis, then you should assume there's a good chance they're right, because they're smart and they know this stuff.
this may seem like a charitable attitude, but it's very detrimental to science, and possibly worse for policy. some of the problems: (1) most new scientific claims *don't* have a high probability of being true. if they did, they would probably already have been confirmed/adopted.
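(aside, to make the base-rate point concrete: here's a toy bayes calculation in python. every number in it is a made-up assumption for illustration, not an estimate from anything in this thread.)

```python
# toy bayes update: how much should one "significant" study move you?
# the power and false-positive values are illustrative assumptions.

def posterior_true(prior_true, power=0.80, alpha=0.05):
    """P(claim is true | significant result), via Bayes' rule."""
    p_sig = power * prior_true + alpha * (1 - prior_true)
    return power * prior_true / p_sig

print(posterior_true(0.50))  # implicit 50-50 prior -> ~0.94
print(posterior_true(0.10))  # low base rate of true claims -> ~0.64
```

with the charitable 50-50 prior, one significant result looks like near-certainty; with a more realistic low base rate, the very same result leaves plenty of room for doubt.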
(2) even when broad claims are "true", they're often so vague as to be practically useless. suppose i hypothesize that "stress reduction should improve post-viral outcomes". this seems plausible. i'd give it a >90% chance of being right. problem is, i have no idea what to do with it.
for a detailed treatment of this issue, see this preprint: https://psyarxiv.com/jqw35. but the short of it is, the variance in any concrete intervention you design will almost always be swamped by all kinds of factors unrelated to 'stress reduction' as you vaguely conceive of it.
so, it's possible (typical, i'd say) for a nebulous claim like "stress reduction improves physiological outcomes" to be both true and quite useless, policy-wise. what's useful is showing that *this* particular intervention works for these people, in this context. but that's hard!
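(here's a toy simulation of that swamping problem, again with invented numbers: a real directional effect of 0.2 sd, buried under person- and context-level variance that has nothing to do with the vague construct.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

treated = rng.integers(0, 2, n)        # half get the intervention
effect = 0.2 * treated                 # true but modest directional effect
person_noise = rng.normal(0, 1.0, n)   # individual differences
context_noise = rng.normal(0, 1.0, n)  # site, implementation, timing, etc.

outcome = effect + person_noise + context_noise

# share of outcome variance attributable to the intervention itself
explained = np.var(effect) / np.var(outcome)
print(f"variance explained by the intervention: {explained:.1%}")  # ~0.5%
```

the directional claim is true by construction, and the intervention still explains well under 1% of what happens to any given person.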
(3) this is maybe the most important point: in the real world, almost every intervention has large opportunity costs. it's not good enough for a vague directional claim to be right; it's not even good enough to know that a specific intervention would have a "significant" effect.
you need to know that the net benefits outweigh the costs. this is really hard to do! for one thing, the status quo is generally not a random point in policy space; usually it reflects an equilibrium developed over a long time, as a result of many (often opaque) trade-offs.
i'm not suggesting we already live in a near-perfect world; we obviously don't. i'm saying we should be very wary of assuming that just because something seems to "work" on paper, dislodging us from the current local minimum has a >50% chance of making things better.
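(a crude expected-value sketch of why "it probably works" isn't enough. every quantity here is an invented placeholder; a real utility analysis would be far messier.)

```python
# crude expected net benefit of rolling out an intervention.
# all values are invented placeholders in arbitrary utility units.

p_works = 0.6            # generous: it helps more often than not
benefit_if_works = 10.0  # upside when it works
cost_if_fails = 8.0      # harms when it fails (wasted resources, lost trust)
fixed_cost = 4.0         # opportunity cost paid either way

expected_net = (p_works * benefit_if_works
                - (1 - p_works) * cost_if_fails
                - fixed_cost)
print(f"expected net benefit: {expected_net:+.1f}")  # 6.0 - 3.2 - 4.0 = -1.2
```

even at 60% odds of working, the expected net benefit comes out negative once the downside and the opportunity cost are priced in.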
we see this dynamic right now with hydroxychloroquine. Trump (and to be fair, many others) say "what's the worst that could happen?" well, the worst that can happen is that you kill people, waste resources, divert meds from other populations, reduce public trust in science, etc.
and that's a case where the medication exists, is (relatively) safe, and there are plausible biological reasons to think that hydroxychloroquine *should* help. for most social science work, you have to make much bigger leaps to get from theory-in-the-head to practical application.
bottom line: real-world policy isn't a game. the fact that you have a cool study and an intuitive theory and a significant result (or 10) doesn't mean you should be shouting, in papers or on twitter, that people should pay attention, b/c your work could help the pandemic effort.
yeah, it *could* help. but it probably won't. and it will probably cost resources to find out that it doesn't. so show some humility and do the legwork yourself (develop and test actual interventions, do utility analyses, etc.) before you start calling for public attention. /END