This is very important. And it applies to any predictive model, not just COVID tests. Models with 95 or 99% 'accuracy' can still produce many false positives/negatives.
1/n https://twitter.com/TomChivers/status/1247449207956537345
This figure nicely illustrates the problem (source: https://medium.com/wintoncentre/the-problems-of-a-bad-test-for-covid-19-antibodies-dbe169f2a11b).
If">https://medium.com/wintoncen... 5% of the population has actually had the virus, a quite & #39;accurate& #39; test (80% sensitivity, 98% specificity) will still give >30% false positives.
2/n
If">https://medium.com/wintoncen... 5% of the population has actually had the virus, a quite & #39;accurate& #39; test (80% sensitivity, 98% specificity) will still give >30% false positives.
2/n
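To make that arithmetic concrete, here is a minimal Python sketch (my own illustration of the figure's numbers, not code from the Winton Centre piece):

```python
# Sketch: reproduce the figure's numbers for 1,000 people at 5% prevalence.
# sensitivity = Pr(test+ | infected); specificity = Pr(test- | not infected).
prevalence, sensitivity, specificity = 0.05, 0.80, 0.98

n = 1000
infected = n * prevalence                        # 50 people truly infected
true_pos = infected * sensitivity                # 40 true positives
false_pos = (n - infected) * (1 - specificity)   # 19 false positives

share_false = false_pos / (true_pos + false_pos)
print(f"{share_false:.0%} of positive results are false")  # -> 32%
```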
This thread provides a great explanation based on Bayes' theorem. Because Pr(test+ | being+) ≠ Pr(being+ | test+), highly 'accurate' tests/models (>90% sensitivity & specificity) can still produce many false positives/negatives, depending on prevalence. 3/n https://twitter.com/taaltree/status/1248467731545911296?s=20
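The Bayes step itself is a one-liner. A sketch under the same definitions as above (the function name `ppv` is mine): Pr(being+ | test+) is the true positives divided by all positives.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value, Pr(being+ | test+), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even at >90% sensitivity and specificity, low prevalence hurts:
print(f"{ppv(0.05, 0.95, 0.95):.0%}")  # -> 50%: half of positives are wrong
```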