Let's talk about tests for a second. When there's a diagnostic test, we look at two numbers: sensitivity (how likely it is that someone who has the condition gets a positive result) and specificity (how likely it is that someone who doesn't have the condition gets a negative result).
Obviously, we want to maximise the sensitivity, but that can mean accepting a lower specificity. So, for instance, modern HIV tests might be 99.9%/99.9%, which means that in some populations the number of false positives and true positives is roughly the same.
i.e. for straight males in their 30s, the prevalence of HIV *might* be roughly 1 in 1000 people. If 1000 such people take the HIV test, one will receive a true positive and roughly one will receive a false positive.
So, if you are a 35-year-old straight man who gets a positive HIV result, your chance of actually having HIV might be only about 50%.
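To make that arithmetic explicit, here's a minimal Python sketch of the calculation (the function name and structure are my own illustration, not something from the thread):

```python
# Positive predictive value: the probability that a positive result is a true positive.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99.9%/99.9% test with a 1-in-1000 prevalence -> roughly 0.5
print(positive_predictive_value(0.999, 0.999, 0.001))  # ~0.50
```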
Now, when it comes to the coronavirus antibody tests, this problem is massively compounded. Current newspaper reports say the tests have 'over 90%' sensitivity; I'll assume a roughly matching specificity, and be generous and call it 95% for both.
If 5% of the population has, or has had, coronavirus, then for 1000 people tested:

~47.5 false positives
~47.5 true positives
2.5 false negatives
At 98%/98% with a 10% prevalence, it's not much better:

18 false positives
98 true positives
2 false negatives
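For the curious, here's a self-contained sketch that reproduces both breakdowns above (my own illustration; the counts match the figures in the thread, per 1000 people tested):

```python
# Count true positives, false positives and false negatives for a test
# with the given sensitivity, specificity and prevalence.
def breakdown(sensitivity, specificity, prevalence, n=1000):
    infected = n * prevalence
    healthy = n - infected
    true_pos = sensitivity * infected
    false_pos = (1 - specificity) * healthy
    false_neg = (1 - sensitivity) * infected
    ppv = true_pos / (true_pos + false_pos)  # chance a positive result is real
    return true_pos, false_pos, false_neg, ppv

# 95%/95% test, 5% prevalence: ~47.5 true positives, ~47.5 false positives -> PPV ~50%
print(breakdown(0.95, 0.95, 0.05))
# 98%/98% test, 10% prevalence: 98 true positives, 18 false positives -> PPV ~84%
print(breakdown(0.98, 0.98, 0.10))
```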
This makes the antibody tests effectively useless for deciding who could, say, go back to their job, or even for learning very much about prevalence in society in general (*at current levels*).
This is going to be a real issue with the tests the government has bought, and I hope it's useful framing for anyone reading about those tests or pinning too much hope on widespread testing as a way to get people back to work.