In the supplement, they report that 2 of the 371 + 35 = 406 known-negative samples tested positive. This implies a 95% confidence interval for the false-positive rate of [0.06%, 1.77%]. In their Santa Clara County sample, 50 / 3,349 = 1.5% tested positive.
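As a sketch of where that interval comes from: an exact (Clopper-Pearson) binomial confidence interval for 2 positives out of 406 known negatives reproduces roughly the same bounds. (I'm assuming an exact interval here; the supplement may have computed it differently.)

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 2 false positives among 371 + 35 = 406 known-negative samples
lo, hi = clopper_pearson(2, 406)
print(f"FPR 95% CI: [{lo:.2%}, {hi:.2%}]")  # roughly [0.06%, 1.77%]
```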
This means that not only are their data consistent with the reported number of positive cases, they're also consistent with all of the positives being false positives and there being zero true positive cases in their sample! (I don't think either of these is actually plausible.)
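A quick back-of-the-envelope check of that claim: if the false-positive rate were at the upper end of its interval, chance alone would produce more positives than were actually observed.

```python
# Hypothetical check (my arithmetic, not the authors'): at the upper
# bound of the false-positive-rate CI, how many positives would we
# expect among 3,349 tests even if true prevalence were zero?
n_tested = 3349
fpr_upper = 0.0177  # upper bound of the 95% CI quoted above
expected_false_positives = fpr_upper * n_tested
print(round(expected_false_positives))  # exceeds the 50 observed positives
```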
It seems likely that we're missing cases, but I'm just pointing out (as others have) that even moderate uncertainty in the false-positive rate of these tests makes it difficult to estimate prevalence precisely when the true prevalence is low.
On a technical note, I'm not sure why the confidence intervals in Table 2 are so small. It may be that the delta method (which relies on asymptotics) was used even though the specificity estimate rests on only 2 positive tests, but I'm not sure.
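To illustrate why asymptotics could matter here (this is my own comparison, not a reconstruction of Table 2): a normal-approximation (Wald) interval for 2/406 comes out markedly narrower than the exact one, and its lower bound even dips below zero.

```python
import math

# Wald (normal-approximation) 95% CI for 2 positives out of 406 --
# an illustration of how asymptotic intervals behave with so few events.
k, n = 2, 406
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
wald_lo, wald_hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Wald 95% CI: [{wald_lo:.2%}, {wald_hi:.2%}]")
# The lower bound is negative and the upper bound (~1.17%) sits well
# below the exact upper bound (~1.77%): with only 2 events, the normal
# approximation understates the uncertainty.
```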