The correct answer (@JosephOChapa) is SURVEY TIMING.

The 2018 survey began on VETERANS DAY, and the numbers for groups that usually report low confidence were very irregular:

Dems: 91 (vs. 79)
"Non-white": 90 (vs. 74)
Ages 18-29: 87 (vs. 69)

Here are some things you should check when you see poll results, a thread. 1/ https://twitter.com/jimgolby/status/1199688497528999936
QUESTION WORDING: In this case, the question was similar to what you see in @Gallup polling, but sometimes questions introduce bias because they prime people to think positively or negatively or give information that may skew their responses. When possible, check the question asked. 2/
In this case, it is pretty good: "For each of the following groups or institutions, please tell us how much trust and confidence you have in them?"

There is some ambiguity in what trust or confidence might mean, and people might interpret it differently, but no obvious bias. 3/
RESPONSE OPTIONS: Answers often vary based on the options provided. What choices did people have? How many? Was there a "don't know/no opinion" option or did respondents have to pick?

In this case, responses were "A great deal; Some; A little; Not much at all; & don't know" 4/
There is nothing clearly wrong with those responses, but they are different from the most common responses used by @Gallup: "A great deal; Quite a lot; Some; Very little; None; & No opinion"

These changes lead to different "topline" results. 5/
@Gallup combines "A great deal" & "Quite a lot" for the number you see (e.g., 73% in 2019) but excludes "Some."

The survey I cited combines "A great deal" & "Some."

Thus we should expect the 2nd survey to have higher totals, & it does: 84% vs. 73% (Gallup) 6/
https://news.gallup.com/poll/1597/confidence-institutions.aspx
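To make that concrete, here is a minimal sketch in Python (with made-up percentages on a single hypothetical scale, not the actual Gallup or 2018 survey data) of how the grouping choice alone moves the topline:

```python
# Illustrative response distribution on one hypothetical scale.
# These numbers are NOT from Gallup or the 2018 survey.
responses = {
    "A great deal": 40,
    "Quite a lot": 20,
    "Some": 25,
    "A little": 8,
    "Not much at all": 4,
    "Don't know": 3,
}

def topline(dist, combine):
    """Sum whichever categories the pollster reports as the headline number."""
    return sum(dist[option] for option in combine)

# Gallup-style topline: "A great deal" + "Quite a lot", excluding "Some".
print(topline(responses, ["A great deal", "Quite a lot"]))  # 60

# A topline that folds "Some" in with "A great deal" instead.
print(topline(responses, ["A great deal", "Some"]))         # 65
```

Same respondents, same opinions, different headline number, purely because of which categories get combined.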
This is a key point: your answers depend in large part on how you define your categories and which options you sort into them. Pay attention to those choices when you read surveys.

Omission of a "Don't know" or "No Opinion" option can also inflate results. 7/
REPRESENTATIVENESS: How was the survey drawn? Was it a "scientific" or "representative" sample, meaning it was drawn more or less randomly from the population? Or was it a "convenience" sample or "voluntary" survey?

If it's not representative, it is biased by who opts in. 8/
This is why surveys on cable news shows are almost always biased in favor of the audience to which the show caters. The only people who respond are viewers who care a lot about the issue, so the results aren't sufficiently "mixed up" to represent the whole population. 9/
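A rough simulation (Python, with made-up numbers, assuming only 30% of the population supports an issue but supporters are far more motivated to respond) shows how badly a self-selected call-in poll can overstate support:

```python
import random

random.seed(0)

# 30% of the population supports the issue (illustrative assumption).
population = [random.random() < 0.30 for _ in range(100_000)]

def responds(supports):
    # Assumed response rates: supporters call in 50% of the time,
    # everyone else only 5% of the time.
    return random.random() < (0.50 if supports else 0.05)

callers = [supports for supports in population if responds(supports)]
share = sum(callers) / len(callers)
print(f"True support: 30% | call-in poll says: {share:.0%}")  # roughly 81%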
You also have to make sure the survey covers the group you actually care about. Do you really want to know what all American citizens think? What registered voters think? What veterans think? If your poll surveys the wrong group, it won't tell you what you want to know. 10/
SAMPLE SIZE: You should also check how many respondents the survey has. For most surveys, 800-1000 means you can be fairly confident in the results. That gives you a margin of error of ~3%, meaning each number could be about 3 points higher or lower.

84% confidence could be 81% to 87%. 11/
Most people seize on headlines saying "Y pulls ahead" or "Z commands a majority," but if the poll is 51-49, it might not be a majority. It could range from 48-52 to 54-46 w/ a sample size of about 1000 people.

Those are very different outcomes: DON'T TRUST THOSE HEADLINES! 12/
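For those who want the arithmetic, here is a sketch of the standard margin-of-error formula at 95% confidence (a simplification that assumes a simple random sample and ignores weighting and design effects that real polls apply):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple
    random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 50%) for ~1,000 respondents: about +/- 3 points.
print(f"{margin_of_error(0.50, 1000):.1%}")  # ~3.1%

# So a 51-49 poll of ~1,000 people could plausibly sit anywhere from
# roughly 48-52 to roughly 54-46.
moe = margin_of_error(0.51, 1000)
print(f"{0.51 - moe:.0%} to {0.51 + moe:.0%}")  # ~48% to ~54%
```

The exact margin shrinks a bit for results far from 50%, which is why the flat "+/- 3 points" is the conservative figure pollsters usually quote.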
But can you really tell what the American people think with only 800 people in a survey? Actually, yes, assuming all the other stuff (e.g. representativeness) checks out.

An imperfect analogy is a blood draw. You only need a small sample to tell what is going on. 13/
OTHER POLLS: You also want to compare the poll to other polls. Are the results wildly different? Looking at more polls can help you focus on similarities and differences that might explain varying results.

In this case, the 2018 Veterans Day poll was very irregular. 14/
TIMING: This was the main culprit I mentioned in this case. Conducting a poll about veterans on VETERANS DAY will skew the results. What events took place around the survey? How might they change the results? 15/
SOURCE: Finally, who conducted the survey? Does the organization have a political goal that might lead them to intentionally design a survey with some of the flaws above?

Biased orgs can conduct good polls, but it is worth checking more closely if there is a clear interest. 16/
I didn't go into nearly as much detail or technical depth on most of these topics as I could have, but I think I hit the main things an average "consumer" of polls should check.

When you know what to look for, checking these things only takes a minute or less. 17/
But it can give you more confidence in polling. Most of the time, polls aren't "wrong" just because they fail to predict a "winner" or "majority" on a close issue.

The people who write bad headlines, or who assume a poll is more precise than it can be, are the ones who are wrong. 18/
Anyway, we're going to see a lot more polls in the next year.

It is worth trying to be an educated consumer who knows the basics. It will help you understand what polls can & can't do, evaluate when & why to doubt bad polls, and know when (& how much) to trust good ones. 19/19