In that short piece, Nate tries to convey what he thinks you should understand by 538’s “90%”. But if we’re doing that much work, maybe it’s worth asking what the number is really doing...
Nate tries a few methods. First, the counterfactual (“What if polls are as wrong as 2016?”). Roughly speaking, the idea is to convey that we ought to be more certain now about a Biden win than we were about a Clinton win.
Next, the historical: when have upsets “equivalent in magnitude” to a Biden loss occurred in the past? These turn out to be rare, but not super-rare: Reagan in 1980; polling errors on Clinton in 1996.
Finally, the psychological: “the point, though, again, is that a Trump win is *plausible*.” It’s not quite clear what “plausible” means here in any non-circular fashion.
So what does it mean for Nate to urge 90% credence in the statement “Biden will win the election”?
One big advantage of Bayesian statistics is that it enables you to reason about the underlying model—by learning the parameters that generate the distribution, you gain insight into the process itself.
But that’s not what’s going on here. Nate is inferring parameters (maybe; it’s hard to tell exactly what he’s doing, since the model isn’t particularly open) in the service of making a prediction.
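To make that contrast concrete, here’s a toy sketch (not 538’s model; the conjugate Beta-Binomial setup and the poll numbers are invented for illustration). The posterior over the latent “support” parameter is the part that gives insight into the process; the single predictive probability is all a “90%” headline reports.

```python
# Toy Beta-Binomial sketch (not 538's model): infer a latent "support"
# parameter from invented poll data, then collapse it into one number.
from scipy.stats import beta

successes, trials = 530, 1000   # hypothetical respondents favoring Biden
a0, b0 = 1, 1                   # flat Beta(1, 1) prior on the support parameter

# Posterior over the parameter: the "insight into the process" part.
a_post, b_post = a0 + successes, b0 + (trials - successes)
print("posterior mean support:", a_post / (a_post + b_post))
print("95% credible interval:", beta.ppf([0.025, 0.975], a_post, b_post))

# Prediction-only summary: probability the latent support exceeds 50%.
print("P(support > 0.5):", 1 - beta.cdf(0.5, a_post, b_post))
```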
If 538’s final prediction has probability 90%, perhaps what that means is “we don’t really know”. I.e., our model is not rich enough to capture the relevant dynamics.
It’s worth knowing that: despite all the polls, we still don’t know very much. Of course, it does rather put a damper on 538’s project.
I think it’s worse than this. I’m not sure a single 90% prediction has the same epistemic content as one 90% prediction in a series of 100 others. That’s why we’re struggling. https://twitter.com/arnoblalam/status/1322768950896271360?s=20
One answer is betting, of course, but this just sneaks repetition in by the back door: the fungibility of money turns into the fungibility of beliefs.
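For what it’s worth, here is the standard betting operationalization spelled out as a toy sketch (the 0.9 credence and the simulated outcomes are assumptions): a 0.9 credence means paying up to $0.90 for a ticket that pays $1 if the event happens, and the “fairness” of that price is an average over many such tickets, which is exactly where the repetition sneaks in.

```python
# Toy sketch of the betting operationalization of a 0.9 credence.
# A single ticket either wins +0.10 or loses 0.90; the price is only
# "fair" as an average over many tickets, i.e. via repetition.
import numpy as np

rng = np.random.default_rng(1)
price, payout = 0.90, 1.00
# Assume the event really does occur 90% of the time across many tickets.
outcomes = rng.random(10_000) < 0.9

profits = outcomes * payout - price
print("single-ticket profit:", payout - price, "or", -price)
print("average profit over many tickets:", round(profits.mean(), 3))  # near 0
```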
Don’t everybody laugh at me at once but maybe that’s what I’m very laboriously realizing. https://twitter.com/briantcairns/status/1322771749033922560?s=20
These questions will become more and more relevant, I think, as we use data science to drive policy. What do we do (morally, and epistemically) with a 90% probability on an unrepeatable event? Who’s responsible when it’s wrong? Is anybody? (I read the 538 post as basically saying “no”.)
One could say OK, this company sold us an algorithm and when it gives 90% probability on X it’s wrong half the time. But that’s sneaking frequentism by the back door...
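To spell out what that frequentist reading amounts to (the forecast/outcome pairs below are synthetic): calibration is a property of a collection of forecasts, and it only shows up when you count outcomes across repeated 90% calls.

```python
# Synthetic calibration check: 200 events all forecast at 90%, but
# occurring only half the time. The miscalibration is visible only
# as a frequency across the whole collection, never in a single call.
import numpy as np

rng = np.random.default_rng(0)
forecasts = np.full(200, 0.9)
outcomes = rng.random(200) < 0.5

hit_rate = outcomes[forecasts >= 0.9].mean()
print(f"forecast 0.90, observed frequency {hit_rate:.2f}")  # roughly 0.5
```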
What we really care about is the one-off, the unrepeatable, where the conditions do not return. And “degrees of confidence” (relevant to us as moral/epistemic agents) are just not, introspectively, “like” probabilities.
Indeed, perhaps the deeper problem is the way in which Bayesianism makes “degrees of belief” the basic currency to begin with. It’s not that it doesn’t matter, but rather that it might not be what makes induction tick.
If I ask myself whether or not I believe a mathematical conjecture or a scientific claim, I do all sorts of things but I’m not sure I “weigh” the evidence.
We want to be right, of course. The question is whether there’s something meaningful about attaching a probability to an unrepeatable event. https://twitter.com/QuasLacrimas/status/1322775916465061889?s=20
Maybe another way to put it is that Bayesianism formalizes induction but cannot capture it. In particular, its internal fluid (probability) doesn’t match the epistemic concepts we have to hand in deliberation.
This is very different from the claim that Bayesianism can’t model cognition, of course. It’s plausible that Bayesianism is an earlier stage in the evolution of mind. But that’s a separate thread, for another time.