When a plan fails, it is often in a way that, post hoc, is clearly explained by "human nature". As @R_Thaler says in Misbehaving: "Compared to this fictional world of Econs, Humans do a lot of misbehaving, and that means that economic models make a lot of bad predictions..."
Yes! But there's also a fallacy in applying that logic here, and maybe a lesson for social science. A plan fails because of one kind of misbehavior, that occurred to a particular extent, in that context. What about all the other ways in which people could have been "human" and weren't?
Just because a prediction that a behavioral scientist is skeptical of fails in a way that makes sense to a behavioral scientist does not mean that the behavioral scientist would have made a better prediction! Prediction in novel contexts is *extremely* difficult!
For example, I would be skeptical of any model that assumes people are well-calibrated regarding risk. But they could be either too sensitive to risk in general, or not sensitive enough in general, or too sensitive to some risks and not sensitive enough to others!
Most behavioral researchers don't do forecasting or prediction, leading to overconfidence in our theories. Many psychologists don't even really try to quantify phenomena in a way that could be an input into a prediction model, focusing on directional evidence/existence proofs.
We talk about this in abstract "philosophy of science" terms -- what it means for theories to be falsifiable, whether they should explain what is not found as much as what is found, etc. But to help inform policy, we need theories that generate improved predictions.
That is a *lot* of hard work when it is even possible, and in most areas of psychology is simply not being done. The initial "proof of concept" insight that something sometimes matters is many miles away from being able to use the insight to make accurate predictions.
Some great policy-relevant work has taken the humbler position that behavioral research should provide *questions* to be tested in field experiments, rather than provide answers (e.g., @R_Thaler & others' work with "nudge units" in govt).
Another way behavioral research could help is by pointing out overconfidence in simple models: improving our calibration about the uncertainty in predictions of human behavior, rather than improving the predictions themselves. That helps policy too!
Ironically, many areas of behavioral research seem to undervalue and even actively discourage exactly that kind of "bad news -- we know less than we think we do" work.
So, yeah, it's arrogant when someone seems supremely confident that their mathematical abstraction will enable them to predict human behavior. We're right to raise a red flag that relying on those predictions, particularly in a novel out-of-sample context, can be very risky.
But most behavioral science folks never put themselves to that test or commit themselves to their own mathematical abstraction, or even make pre-hoc forecasts. So, let's not arrogantly assume that we would have known better or done better, when a forecast turns out to be wrong!