This is very important, and it applies to any predictive model, not just COVID tests. Models with 95% or 99% 'accuracy' can still produce many false positives/negatives.

1/n https://twitter.com/TomChivers/status/1247449207956537345
This figure nicely illustrates the problem (source: https://medium.com/wintoncentre/the-problems-of-a-bad-test-for-covid-19-antibodies-dbe169f2a11b).

If 5% of the population has actually had the virus, even a fairly 'accurate' test (80% sensitivity, 98% specificity) will still yield >30% false positives among its positive results.

2/n
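
A quick sanity check of those numbers, as a minimal Python sketch (the function name and structure are my own, not from the thread):

def false_positive_share(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are false positives."""
    true_pos = prevalence * sensitivity                # infected and correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)   # uninfected but flagged anyway
    return false_pos / (true_pos + false_pos)

# Numbers from tweet 2: 5% prevalence, 80% sensitivity, 98% specificity
print(false_positive_share(0.05, 0.80, 0.98))  # -> 0.322..., i.e. >30% of positives are false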
This thread provides a great explanation based on Bayes' theorem. Because Pr(test+ | being+) ≠ Pr(being+ | test+), highly 'accurate' tests/models (>90% sensitivity & specificity) can still produce many false positives/negatives, depending on prevalence. 3/n https://twitter.com/taaltree/status/1248467731545911296?s=20
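
Concretely, Bayes' theorem connects the two quantities. Using the numbers from tweet 2 (5% prevalence, 80% sensitivity, 98% specificity) as a worked example:

Pr(being+ | test+) = Pr(test+ | being+) · Pr(being+) / Pr(test+)
                   = (0.80 × 0.05) / (0.80 × 0.05 + (1 − 0.98) × 0.95)
                   = 0.04 / 0.059 ≈ 0.68

So only ~68% of positive results are true positives; the remaining ~32% are false positives, matching the figure above.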