I will offer an olive branch. There is a lot of value in incorporating context and prior judgment or "subjectivity" in statistical models (that's why being Bayesian is so great!). A lot of what Dave says in this thread is reasonable. Let me just respond to add my point of view. https://twitter.com/Redistrict/status/1299033643529236480
I want to avoid being argumentative here. Hopefully, in exposing people to both points of view, we can learn something from the disagreements between our methods.
First, just to quantify differences and put them in context, let's say Dave gives Biden a roughly 67% chance of winning today (between his 60% and 75%). Our model has him at 88%. That's a big difference! However, I think it's actually pretty easy to explain.
Starting off: We both have data that is good for Biden. Trump's approval is low and very stable. Incumbents typically get blamed for a bad economy. Polls have been at Biden +9 points for months, and polarization makes them more predictive. The DNC/RNC probably won't matter, etc.
We start to diverge from here. Dave looks at these data and perhaps thinks "Well, they are good for Biden, but I think they're only reasonably predictive, and chaotic things can happen, so I'm going to give them some but not a whole bunch of weight."
He then balances the actual data about the race against a prior, which is that polls were wrong in 2016 and that Trump has a good stock of votes left among non-college whites in the Midwest that can help him make up ground. That's fine and totally valid! But is it probabilistic?
No, I don't think it's the _most_ probabilistic or predictive way to model the race. What I think is happening is that a lot of pundits and analysts, Dave included, are focused on the error term in the model. What if the economy ISN'T predictive? What if polls ARE wrong again?
Those are good questions! We just think that models are better suited to answering them than subjective assessments of the race (which, tbh, appear to be heavily swayed by recency bias and a dose of analysis paralysis: too much data can push an inference toward 50-50).
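To make "balancing data against a prior" concrete, here's a minimal sketch of the formal version: a precision-weighted average of a poll signal and a prior belief, where the weight each source gets comes from a stated uncertainty rather than a gut feeling. Every number below is illustrative, not our model's actual inputs.

```python
# A minimal sketch of formally "balancing data against a prior":
# a precision-weighted average of a poll-based estimate and a prior belief.
# All numbers are illustrative, not our model's actual inputs.

poll_mean, poll_sd = 9.0, 4.0     # polling signal: Biden +9, with its uncertainty
prior_mean, prior_sd = 3.0, 6.0   # prior belief about the margin, e.g. fundamentals-based

w_poll = 1 / poll_sd ** 2         # each source is weighted by its precision
w_prior = 1 / prior_sd ** 2

post_mean = (w_poll * poll_mean + w_prior * prior_mean) / (w_poll + w_prior)
post_sd = (w_poll + w_prior) ** -0.5

print(f"posterior margin: {post_mean:+.1f} +/- {post_sd:.1f}")
```

The weight the polls get falls out of the stated uncertainties; it isn't an eyeballed "some but not a whole bunch."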
History suggests that polling averages usually aren't biased in the same direction from one election to the next. If we baked an assumption that 2016's miss will repeat itself into our model, it would make the forecast worse, not better.
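A quick illustration of why, with assumed numbers (a roughly 3-point poll miss each cycle whose direction doesn't persist from one cycle to the next):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers: polling averages miss by ~3 points per cycle,
# but the direction of the miss doesn't persist across cycles.
n = 100_000
bias_now = rng.normal(0, 3, n)    # this cycle's true poll bias
bias_last = rng.normal(0, 3, n)   # last cycle's bias, independent of this one

rmse_raw = np.sqrt(np.mean(bias_now ** 2))                       # take the polls at face value
rmse_fight_2016 = np.sqrt(np.mean((bias_now - bias_last) ** 2))  # assume last cycle's bias repeats

print(f"RMSE, polls as-is: {rmse_raw:.2f} | RMSE, 're-fighting 2016': {rmse_fight_2016:.2f}")
```

Under that assumption, "correcting" for last cycle's miss multiplies the typical error by about 1.4 rather than reducing it.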

However, a tangential point: what if Dave is accounting for the possibility of shy Trumpers?
That would certainly push his estimates down. Now, again, we don't find evidence for this so it doesn't go into the model, but you get the point I'm making. Dave's analysis is driven by a lot of judgment, which is reasonable and theoretically statistical, but perhaps flawed.
So, this is where our model comes in. We think it is BETTER to rely on a stock of historical data to tell you how reliable your current data, or certain narratives, really are. Indeed, election models have beaten "qualitative" analysts in the past for exactly this reason.
What we're doing with our statistical model is comparing today's data with a historical stock of data, asking how often candidates with numbers like the ones we see now have gone on to win the election, and quantifying the error term associated with those relationships.
Instead of thinking "well, polls have been wrong before" or "well, there are lots of non-college whites in WI," our model tells us HOW wrong polls have tended to be and how likely WWC voters are to change the trajectory of the race. Quantitative analysis... helps you quantify!
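Stripped to its core, that exercise looks something like the sketch below: take the current polling margin, add the error you'd expect given how far off final polling averages have historically been, and count how often the lead survives. The inputs are illustrative stand-ins, not the model's actual estimates, and a real model works state by state rather than off one national number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs, not the model's actual estimates:
poll_margin = 9.0   # Biden's national polling lead, in points
error_sd = 7.0      # assumed historical SD of polling-average error

# Simulate plausible election-day margins and count how often the lead holds.
sim_margins = rng.normal(poll_margin, error_sd, size=100_000)
print("P(lead survives) ≈", round((sim_margins > 0).mean(), 2))
```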
Now, maybe you have more information that you want to include in your model. We, for example, tend to focus on how paralyzed polarization has caused the electorate to become. As far as I can tell, Dave is ignoring that in his formal model (& so are other modellers).
This leads to people seeing huge movement in races such as 1976 and 1988 and thinking "oh, polls can move around a lot." Well, (a) that probably already skews our judgment of the variance in the polls, and (b) that movement has gotten smaller over time. That increases Biden's chances.
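Here's the same toy simulation as above run under two assumed error scales, an older high-volatility one and a narrower polarized-era one, to show why shrinking swings translate directly into a higher probability for the polling leader. Both scales are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
lead = 9.0  # same illustrative +9 polling lead as above

# Two assumed error scales: swings on a 1976/1988 scale vs. the polarized era.
for era, sd in [("high-volatility era", 12.0), ("polarized era", 7.0)]:
    p = (rng.normal(lead, sd, size=100_000) > 0).mean()
    print(f"{era}: P(lead survives) ≈ {p:.2f}")
```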
Now, on the other hand, maybe you want to include some prior information about polls missing WWC voters, or about Trump having some backlog of support among them. Cool! You can use that information statistically if you also have a sense of the uncertainty around it.
Setting aside that I don't think this is the right call to make, if you wanted to say that Kenosha or the RNC makes Biden-voting non-college whites more likely to desert him, maybe by 3 points, +/- 5, you can use that information in your model. Dave's mental model is doing this.
And, in this case, that's one of the reasons why Dave is closer to 67% Biden than 88%.
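To see how such a prior propagates through, here's the earlier toy simulation with the hypothetical 3-point, +/- 5 shift folded in. Again, these are illustrative numbers, not anything Dave or we have actually estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
poll_margin, error_sd = 9.0, 7.0  # same illustrative inputs as above

# Hypothetical prior: a shift against Biden of 3 points, with an SD of 5.
shift = rng.normal(-3.0, 5.0, size=100_000)
sim_margins = rng.normal(poll_margin + shift, error_sd)
print("P(lead survives) ≈", round((sim_margins > 0).mean(), 2))
```

Under those assumptions, the roughly 90% from the earlier sketch drops into the mid-70s, which is the direction (if not the exact size) of the gap between our 88% and Dave's roughly 67%.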
OK, that's the stuff I think we can learn most from. But for the sake of posterity, I also think quite a lot of the data and thinking going into Dave's mental model sits closer to the TV-pundit end of the handicapping spectrum than to his usual high-quality work.
Dave says that voters have short memories and so might stop punishing Trump for covid (fine), and that our model doesn't account for that (wrong). By way of modeling historical poll dynamics, the model knows average voter "memory" over the past 18 cycles. https://twitter.com/Redistrict/status/1299034644139175936
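To be clear about what "the model knows average voter memory" means in practice, here's a toy version: a shock to the incumbent's margin decays at some persistence rate, and that rate can be estimated from how quickly poll movements have faded in past cycles. The persistence value below is an assumption for illustration, not the model's estimate.

```python
# Toy "voter memory": a shock to the margin decays at an estimated persistence
# rate rather than at a rate a pundit guesses. Numbers are illustrative only.
rho = 0.97               # assumed day-to-day persistence of a polling shock
shock = -4.0             # hypothetical 4-point hit to the incumbent's margin
days_to_election = 70

remaining = shock * rho ** days_to_election
print(f"Of a {shock:+.0f}-point shock, {remaining:+.1f} points remain on election day.")
```

Whether voters "forget" covid is then an empirical question about that persistence rate, not something you have to decide by feel.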
Dave could maybe say that voter memory is shorter now and therefore swings in polls are likelier, but this misses two broader points: (a) partisanship is driving the majority of voter choice, and (b) polarization has made swings LESS likely. So it's not a true point anyway.
There's also the argument that Trump can influence the media in ways that drive down voters' opinions of his opponent. But that's an n=1 case that probably isn't predictive: Joe Biden is no Hillary Clinton, and Trump has been attacking Biden for months with zero change in the polls.
And then there are three points, about a "plausible" 5-percentage-point gap in the EC, a 4% chance of a tie, and the claim that Trump steals 6 points of Biden's vote margin via postal trouble, which we have debunked often and empirically. See https://twitter.com/jipkin/status/1299045353694597122?s=20 and https://www.economist.com/united-states/2020/08/22/more-mail-in-voting-doubles-the-chances-of-recounts-in-close-states
So, to recap:

Models that incorporate external information about the world are good. We like priors too! We just place a little more value on the data than Dave does, and think his analysis is probably a bit contaminated with some of the classic heuristics that trip up probabilistic reasoning.