I’m glad to see a weak IV paper getting so much attention! Since I know a lot of ppl won’t read past the abstract, I thought it might be helpful to do a short thread on the paper. 1/ https://twitter.com/pedrohcgs/status/1316222731491385351
The headline number that’s gotten a lot of attention is F>104. Let me explain exactly what that means.

Suppose we constructed confidence intervals like this:
* If F>Fbar, use betahat_IV +/- 1.96 * se
* If F<=Fbar, set the CI to (-inf, inf)

2/
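As a quick sketch of that rule in code (names and defaults are mine; Fbar = 104 is the paper’s threshold for a nominal 95% interval):

```python
import numpy as np

def pretest_ci(beta_hat, se, F, F_bar=104.0, crit=1.96):
    """Pre-test CI rule from the thread: the usual t-based interval
    when the first-stage F exceeds F_bar, otherwise the whole real
    line. (Illustrative sketch; names are mine, not the paper's.)"""
    if F > F_bar:
        return (beta_hat - crit * se, beta_hat + crit * se)
    return (-np.inf, np.inf)

print(pretest_ci(1.0, 0.5, F=200.0))  # F above threshold: t-based interval
print(pretest_ci(1.0, 0.5, F=50.0))   # F below threshold: (-inf, inf)
```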
The paper asks the Q: if I construct CIs in this way, for what threshold Fbar will my CI contain the true parameter at least 95% of the time, regardless of how weak the instrument actually is?

The answer is F-bar = 104. 3/
Of course, this CI always contains the true parameter when F<=Fbar, since it’s the whole real line. Conditional on F>Fbar, coverage will generally be below 95% -- the 95% guarantee is unconditional, with failures when F>Fbar offset by automatic successes when F<=Fbar. 4/
So if you only report the CI when F>Fbar, you won’t get correct coverage in the cases where you actually use it, even with Fbar = 104! 5/
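A quick Monte Carlo sketch of this conditional-coverage distortion, using the common F > 10 rule of thumb as the pretest (parameters are illustrative choices of mine, not the paper’s):

```python
import numpy as np

# Just-identified IV with a weak first stage and strong endogeneity.
rng = np.random.default_rng(0)
nsim, n = 10_000, 100
beta, pi, rho = 1.0, 0.25, 0.95  # true effect, first-stage slope, corr(u, v)

z = rng.standard_normal((nsim, n))
u = rng.standard_normal((nsim, n))
v = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal((nsim, n))
x = pi * z + v
y = beta * x + u

zx = (z * x).sum(axis=1)
zz = (z * z).sum(axis=1)
beta_hat = (z * y).sum(axis=1) / zx            # IV estimate

e = y - beta_hat[:, None] * x                  # structural residuals
se = np.sqrt((e**2).mean(axis=1) * zz) / np.abs(zx)  # homoskedastic se

pi_hat = zx / zz                               # first-stage slope estimate
v_hat = x - pi_hat[:, None] * z
F = pi_hat**2 * zz / (v_hat**2).mean(axis=1)   # first-stage F-stat

covered = np.abs(beta_hat - beta) <= 1.96 * se
passed = F > 10                                # rule-of-thumb pretest
print(f"share passing F > 10: {passed.mean():.2f}")
print(f"coverage conditional on passing: {covered[passed].mean():.3f}")
```

Conditional on passing the pretest, the nominal 95% interval covers the truth well under 95% of the time in this design.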
You might ask: for what value of F-bar will I get correct coverage conditional on F > F-bar?

Unfortunately, the answer is that there is no such value of F-bar! At least if we don’t restrict the strength of the instrument. 6/
This impossibility follows from a beautiful argument by Dufour, who shows that any CI robust to weak identification must have infinite length with positive probability. 7/
What’s the intuition? Well, if the first stage were actually 0, then the parameter wouldn’t be identified -- any value could be the truth. So your CI would have to contain each possible value at least 95% of the time. 8/
By continuity arguments, the CI must also have infinite length with positive probability when instrument strength is close to 0.

But normal distributions are “absolutely continuous” -- meaning that if a normal w/one mean assigns probability >0 to an event, so too does a normal w/any other mean. 9/
It follows that, regardless of the actual instrument strength, any CI that is robust to weak identification must have infinite length with positive prob. Thus, no t-based interval can be robust to weak IV! 10/
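For reference, here’s a rough formalization of the argument in 8/–10/ (my notation, not Dufour’s). Uniform coverage means

```latex
\[
  \inf_{\beta,\,\pi}\; \Pr_{\beta,\pi}\bigl(\beta \in C\bigr) \;\ge\; 1-\alpha .
\]
% Under \pi = 0 the data are uninformative about \beta, so
\[
  \Pr_{0}\bigl(\beta \in C\bigr) \;\ge\; 1-\alpha
  \quad \text{for every } \beta \in \mathbb{R},
\]
% and letting \beta \to \pm\infty,
\[
  \Pr_{0}\bigl(C \text{ is unbounded}\bigr) \;\ge\; 1-\alpha \;>\; 0 .
\]
% Mutual absolute continuity of the Gaussian family across \pi then gives
\[
  \Pr_{\pi}\bigl(C \text{ is unbounded}\bigr) \;>\; 0
  \quad \text{for every } \pi .
\]
```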
All of this motivates using a weak-identification robust confidence set rather than relying on a pre-test for the first stage F-stat! 11/
The best-known such procedure is Anderson-Rubin (AR). In the case of a single instrument, AR has some nice properties: it’s robust to weak IV, and converges in prob to the usual interval under strong identification! 12/
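Here’s a grid-based sketch of the AR mechanics for the just-identified case (homoskedastic errors, no controls, LM-style statistic; function name, grid, and demo data are mine):

```python
import numpy as np

def ar_confidence_set(y, x, z, grid, crit=3.84):
    """Grid-based Anderson-Rubin 95% confidence set for a single
    instrument: keep each candidate b0 for which we cannot reject
    that z is uncorrelated with y - b0*x."""
    zz = z @ z
    accepted = []
    for b0 in grid:
        e = y - b0 * x                              # residual under H0: beta = b0
        ar = (z @ e) ** 2 / (zz * np.mean(e ** 2))  # ~ chi2(1) under H0
        if ar <= crit:
            accepted.append(b0)
    return np.array(accepted)

# demo: with a strong instrument the AR set is a tight interval
rng = np.random.default_rng(1)
n = 500
z = rng.standard_normal(n)
x = z + 0.3 * rng.standard_normal(n)   # strong first stage
y = 1.0 * x + rng.standard_normal(n)   # true beta = 1 (exogenous here, for simplicity)
ci = ar_confidence_set(y, x, z, np.linspace(0.0, 2.0, 401))
print(ci.min(), ci.max())
```

Under strong identification the accepted set is close to the usual +/- 1.96*se interval; under weak identification it can be unbounded, which is exactly what Dufour’s result requires of a robust procedure.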
This paper proposes an interesting alternative to AR. Their intervals look like t-based intervals, except the critical value depends on the first-stage F-stat. Neat! 13/
The authors note in the conclusion that in future work they plan to compare their new procedure to AR to see which has better power. I’m excited to see that. 14/
In the meantime, I think the first-order thing is that ppl use an inference procedure that is robust to weak IV, be it AR or the new tF method. 15/
To repeat, if I have one takeaway for applied researchers it’s this:

👏Don’t pre-test with F👏.
👏Use a weak IV robust inference method instead.👏 16/16
You can follow @jondr44.