(I'm super rushed for time so please forgive any typos or unclear statements.)
First, it's pretty clear that many people don't understand probabilistic forecasts well. That's true of the typical consumers of forecasts (readers), but also of elite journalists who confuse a 20% chance with 0. This is probably the leading barrier to extracting the maximum value from models.
Second, though it may seem contradictory, it's still true that presenting people with probabilities makes them better able to gauge the _likelihood_ of candidate A or B winning, relative to an alternative where they're not given that information. See this chart: https://www.dartmouth.edu/~seanjwestwood/papers/aggregator.pdf
IMO, this actually provides a pretty strong journalistic incentive for forecasting. There are lots of polls out there and we know people think they're better (more certain) indicators than they are. A *good* forecasting model improves coverage. (Of course, not all are good.)
But this is where we run into problems, and where the authors make their chief claim. They say that forecasts are not just influencing perceptions of the race, but voters' actual behavior. I have a few problems believing that these effects are huge, but it's reasonable to expect non-zero effects ...
... AMONG the people who are exposed to election forecasts. I'm making two claims about the external validity of their study here, so let me unpack them. First, some context.

The authors use a randomized experiment to evaluate the effects on turnout of seeing probability forecasts.
The experiment splits respondents into two teams and asks each whether they want to spend $1 out of the $15 in their wallet to "vote" for their team to win the game. If enough people on their team vote, they gain $2; if the other team wins, they lose $2.
Before being asked whether they would vote for their team or pass, respondents were also shown random odds from 0 to 100. This allowed the authors to see whether there was a relationship between the presented likelihood of one's team winning and their likelihood of spending the $1.
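To make the setup concrete, here's a minimal sketch of that incentive structure in Python. The dollar amounts match my description above; the sample size and the behavioral rule linking shown odds to the decision to "vote" are made-up assumptions for illustration, not the authors' design or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000                              # hypothetical number of respondents (assumption)
shown_odds = rng.integers(0, 101, N)    # random odds (0-100) shown before the vote/pass choice

# Payoffs as described in the thread: voting costs $1 of a $15 endowment,
# a team win pays $2, a team loss costs $2.
ENDOWMENT, COST, WIN, LOSS = 15.0, 1.0, 2.0, -2.0

# Made-up behavioral rule (NOT from the paper): the chance a respondent pays
# to vote rises with the odds shown for their team.
p_vote = 0.2 + 0.5 * (shown_odds / 100)
voted = rng.random(N) < p_vote

# The relationship the authors examine: does "turnout" vary with the shown odds?
for lo, hi in [(0, 33), (34, 66), (67, 100)]:
    mask = (shown_odds >= lo) & (shown_odds <= hi)
    print(f"shown odds {lo:>3}-{hi:<3}: turnout {voted[mask].mean():.2%}")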
So let me say up front that I think this is pretty clever. I've designed experiments before and they can be hard, and this one captures many of the steps of the political process that might be relevant to the hypothesis that showing 99% odds for a candidate decreases turnout.
However... is the experiment actually similar to voting? I think the effects in the experiment are likely larger than real-world effects, chiefly because the stakes people attach to their candidate losing an election are much higher than the stakes of losing $2 in an online game.
IF that's true — and it's certainly an IF, though the authors have noted similar concerns about validity — then the overall implications that forecasting -> turnout or that forecasting -> Trump might be overstated and/or misleading.
A second potential problem with the validity of the study is the question of WHO is reading election forecasts. The effects on turnout are relevant if millions of medium-engagement voters are likely to read them and then decide whether voting is worth their time/effort.
However, if the people primarily exposed to the probabilities from forecasting projects like ours or 538 are higher-SES voters *who are very likely to vote anyway,* then the proposed decrease in their likelihood of turning out might not actually tip them into non-voter territory.
OK, so I don't buy the magnitude of the effects they claim — namely, that a 99% Biden forecast would decrease turnout by nearly 10 pts. (Let's also set aside that a good model would only show so high a % in an election where a 10% turnout differential probably wouldn't matter.)
But let me be very clear about something: Even if the turnout penalty is, say, two or three points, then *election forecasts are influencing not only how people perceive the race, but how they act.* And that's a role we need to take seriously.
For example, James Comey says in his book that he only wrote his late Oct 2016 letter to Congress because he was "sure" Hillary Clinton was going to win. Are forecasts responsible for that? Probably not all of it — and, of course, the good ones showed a 70-80% chance not 100%.
Comey is an edge case, of course, but the chance that we COULD have contributed to election-altering actions is important.

Similarly, election forecasts definitely have downstream effects on the media (I know because we decide some coverage based on our model!).
If people at news outlets buy into forecasts that show 99% for their candidate (rightly or not), that might lead them to write crappy takes about the election not being competitive that millions of other people see, further mediating a forecast -> turnout relationship.
So while I do think that concerns about forecasts are exaggerated, I also think there are very good reasons for forecasters to be thinking about these problems when they're building their models, deciding how to present them, and making coverage decisions based on their estimates.
I do think that the "Election forecasts helped elect Trump in 2016" headline is silly and misleading, and maybe dangerous given how forecasts have helped to correct some of the biases of the media in the past (hello, 2016!). But the spirit of it is worth taking seriously.
But let me emphasize my point about alternatives. If those of us who have the skills don't build good election forecasting models, how bad will the ones be that spring up from the woodwork? And we've seen repeatedly throughout 2020 that pundits who cover this are bad at it.
In the end, the calculus for me is:

(1) People don't understand the true uncertainty in polls
(2) We can learn a lot about political behavior with social science-y forecasting models
(3) They improve our coverage
BUT
(4) We don't know the scale of their negative consequences
Which leaves me to tweet one more agreement with the authors of the op-ed and motivating journal article, who I respect as good and nice (enough 😉) academics: You should be voting in this election regardless of what our forecasts say.