I've seen a lot of news about the LA study, and the error bars of that study may be seriously flawed as well. It is plausible that the data in both the Santa Clara and LA studies are consistent with extremely low prevalence in the study populations.
Does this mean the authors are wrong and the disease is deadly? No, not at all. I really hope the conclusions in these studies are correct and we can all stop being afraid of this virus. This analysis suggests that we don't know enough yet to confidently claim a huge undercount.
The errors this time have less to do with methodology and more to do with the authors' claims about test specificity. In the preprint, they state that the manufacturer of the test evaluated 371 negative controls (of which 369 were classified correctly).
This is misleading. I found the manufacturer data in the package insert attached here (https://mms.mckesson.com/product/1163497/Premier-Biotech-RT-CV19-20), and the 369/371 specificity figure comes from the IgG test alone. The IgM antibody test correctly classifies only 368/371 of the true negatives.
In the Stanford study, a person is classified as positive "by either IgG or IgM." But they only included the IgG false positive rate and failed to account for the IgM one. Since a positive on either test counts as a positive, the false positives from both tests should be combined, which can roughly *double* the false positive rate.
Since it is not specified whether the false positives in the IgG and IgM tests overlapped, it is possible that only 365/371 (≈98.4%) of the true negatives were correctly classified. With this revised specificity, the 95% CIs on the Santa Clara County study prevalence easily include 0%.
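To make the arithmetic concrete, here is a rough bootstrap sketch of that claim (my own illustration, not the authors' code or the bootstrap/Bayesian code mentioned below). The Santa Clara counts I plug in (50 raw positives out of 3,330 samples) are taken from the preprint, and I assume perfect sensitivity for simplicity; the point is only how the combined 365/371 specificity pulls the interval's lower bound to 0%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Santa Clara counts from the preprint (double-check before relying on them).
n_samples, n_pos = 3330, 50
# Combined IgG + IgM specificity, assuming the manufacturer's false positives don't overlap.
spec_n, spec_k = 371, 365
sens = 1.0  # simplifying assumption: perfect sensitivity

B = 100_000
# Resample both the specificity validation data and the survey positives.
spec_draws = rng.binomial(spec_n, spec_k / spec_n, size=B) / spec_n
raw_draws = rng.binomial(n_samples, n_pos / n_samples, size=B) / n_samples

# Rogan-Gladen correction: prevalence = (raw rate - FPR) / (sensitivity - FPR), clipped at 0.
fpr = 1 - spec_draws
prev = np.clip((raw_draws - fpr) / (sens - fpr), 0.0, 1.0)

lo, hi = np.percentile(prev, [2.5, 97.5])
print(f"95% bootstrap CI for prevalence: [{lo:.2%}, {hi:.2%}]")
```

With these inputs, the expected false positive rate (~1.6%) already exceeds the raw positive rate (~1.5%), so the lower bound of the interval sits at 0%.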
In the LA study, they avoid this kind of methodological error by simply assuming the test is 100% specific (this information comes from the since-removed summary of the LA county study hosted on http://redstate.com). However, I initially didn't think there was much to be worried about, because the observed prevalence was so much higher (35 observed positives out of 863 samples). Even accounting for the revised 365/371 specificity, the data appeared consistent with very high prevalence in the study population (albeit with larger error bars than reported).
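A quick back-of-the-envelope check (again my own sketch, using the 35/863 counts above, the combined 365/371 manufacturer specificity, and perfect sensitivity assumed) shows why: even then, the expected number of false positives is well below the number observed.

```python
# LA counts from the summary cited above; specificity from the combined manufacturer data.
n_samples, n_pos = 863, 35
fpr = 1 - 365 / 371                                  # ~1.6% false positive rate
expected_fp = fpr * n_samples                        # ~14 expected false positives out of 863
corrected = (n_pos / n_samples - fpr) / (1 - fpr)    # Rogan-Gladen, perfect sensitivity assumed
print(f"expected false positives ≈ {expected_fp:.1f}, corrected prevalence ≈ {corrected:.1%}")
```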
On that same McKesson page advertising the test for purchase, however, there is also an independent evaluation of the antibody test from the Jiangsu CDC. In their evaluation, only 146 of the 150 samples in their negative control were classified as negative.
The resulting ~97.3% specificity means that the error bars provided in the LA study are far too small. Running my code, @graduatedescent's parametric bootstrap code, or the Bayesian model posted by Ethan Steinberg yields CIs for study prevalence with a lower bound at or near 0%.
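For reference, here is the same style of bootstrap sketch as above with the Jiangsu CDC specificity (146/150) plugged in. This is my own illustration, not the code or model referenced, and it again assumes perfect sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# LA counts as above; Jiangsu CDC negative-control evaluation: 146/150 correctly classified.
n_samples, n_pos = 863, 35
spec_n, spec_k = 150, 146
sens = 1.0  # simplifying assumption: perfect sensitivity

B = 100_000
spec_draws = rng.binomial(spec_n, spec_k / spec_n, size=B) / spec_n
raw_draws = rng.binomial(n_samples, n_pos / n_samples, size=B) / n_samples

fpr = 1 - spec_draws
prev = np.clip((raw_draws - fpr) / (sens - fpr), 0.0, 1.0)

lo, hi = np.percentile(prev, [2.5, 97.5])
print(f"95% bootstrap CI for prevalence: [{lo:.2%}, {hi:.2%}]")
```

With only 150 negative controls, the uncertainty in that ~2.7% false positive rate alone is large enough to pull the lower bound of the prevalence interval to roughly 0%.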
What's the takeaway? Given the available data on the specificity of the test used in the Stanford/USC studies, the antibody prevalence data collected does not substantiate the 20-85x undercounting claims presented by the authors.
The last thing I'll say is that unlike a lot of the amazing people who post on these topics (or the authors of this study), I don't consider myself an expert. But I hope there's a role for all of us to carefully evaluate research presented and kindly point out possible errors.