A good-faith discussion would start by distinguishing models from model parameters, so let's do that! https://twitter.com/JohnCornyn/status/1248581686352318464
A model is generally a mathematical and/or computational method that is used to estimate future results. There are a lot of things that go into creating a model. One is figuring out what the underlying behavior is.
So, for instance, epidemiologists who have studied outbreaks have determined that historical patterns of outbreak infection fit an exponential curve. There is a lot of explanation as to why they do, but we know that historically they do.
(After further consideration, I decided not to put equations in here.)

Here are two exponential curves. They are not the same curve. They are both exponential curves.
We know the basic model for outbreak is exponential (up until a point). And look, with social distancing and lockdown as currently being practiced in the US, the basic model is *still* an exponential. It’s just a less steep exponent.
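To see the shape of that in numbers, here is a minimal sketch. The starting count and the two daily growth rates are made up purely for illustration; they are not taken from any real data or any actual epidemiological model.

```python
# Two exponential curves with different growth rates (made-up numbers).
# Both are exponential; one is just less steep.
days = range(0, 15)

unchecked = [100 * 1.35 ** t for t in days]   # hypothetical ~35% daily growth
distanced = [100 * 1.10 ** t for t in days]   # hypothetical ~10% daily growth

for t, a, b in zip(days, unchecked, distanced):
    print(f"day {t:2d}: unchecked {a:10.0f}   distanced {b:10.0f}")
```

Both columns keep multiplying day over day; the second one just multiplies by a smaller number.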
We know, at this point in the COVID-19 pandemic, just how steep the exponent is for unchecked transmission. That's because we've basically measured it a bunch of times, in separate countries.
What we *don’t* know with any precision, and won’t know until we are in the thick of it, is how steep the exponent will be under the social distancing currently being practiced in the US.
You can’t extrapolate simply from other countries because there are a ton of factors that come into play: how easy it is for people to get home deliveries, how compliant people are, what common practices there are around hygiene and mask wearing, and so forth.
So people make their best guess. It’s a new virus, and these are new circumstances. We don’t know how people are going to respond. So they make a guess, and they try to make it as educated as possible, but it contains a lot of uncertainty.
This uncertainty can often be represented as error bars, but the thing to keep in mind is that the uncertainty sits in an exponent that gets multiplied by time.
As a totally made up example, let’s say our exponent over time looked like (sorry, here are equations)

f(t) = 2^(ct)

You might be fairly confident that if t is measured in days, c is between 1.95 and 2.05.

That’s an uncertainty of 0.10, which doesn’t sound like a lot.
But that means that after 7 days you have a lower bound of about 12,854 and an upper bound of about 20,882. That tiny difference in the exponent has already compounded into a gap of more than a factor of 1.6, and that’s after only 7 days.
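To make the arithmetic concrete, here is the same made-up example in a few lines of Python. The numbers are purely illustrative, not real case counts.

```python
# The made-up model from above: f(t) = 2 ** (c * t), with c believed to be
# somewhere between 1.95 and 2.05.
t = 7  # days

lower = 2 ** (1.95 * t)   # ≈ 12,854
upper = 2 ** (2.05 * t)   # ≈ 20,882

print(f"lower bound: {lower:,.0f}")
print(f"upper bound: {upper:,.0f}")
print(f"ratio: {upper / lower:.2f}")  # the 0.10 gap in c compounds to ~1.6x after 7 days
```

The gap between the bounds keeps widening the further out you project, because the uncertainty lives in the exponent.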
When people update model predictions, you’re seeing two things.

(1) Changed behavior results in different outcomes, which they are now modeling.
(2) They are updating model parameters to better match observed data (a rough sketch of what that can look like follows).
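Here is a minimal sketch of point (2). The daily counts and the simple log-linear fit are assumptions made up for illustration; a real epidemiological model would use actual case data and a far more careful fitting procedure.

```python
import math

# Invented daily counts, used only to show the idea of fitting an exponent.
observed = [120, 160, 230, 310, 420, 580, 790]

# For exponential growth f(t) = f(0) * 2 ** (c * t), log2 of the counts
# falls on a straight line with slope c, so estimate c from that slope.
days = list(range(len(observed)))
log_counts = [math.log2(n) for n in observed]

mean_t = sum(days) / len(days)
mean_y = sum(log_counts) / len(log_counts)
c_hat = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, log_counts)) \
        / sum((t - mean_t) ** 2 for t in days)

print(f"estimated exponent c ≈ {c_hat:.3f}")
# As new observations arrive, rerun the fit and c_hat shifts. That's a
# parameter update, not a change to the underlying exponential model.
```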
And remember that epidemiologists in the US are operating at a *massive* deficit, because we are not testing everyone. We are not even testing a representative portion of everyone.
So the expectation that they would be able to get it 100% perfectly right, when there’s an exponent involved in the uncertainty, and a data vacuum? This is an unreasonable expectation.
If Senator Cornyn wants the most accurate model that we can possibly get, he needs to start screaming about the lack of testing. You can’t expect even slightly accurate models with garbage data.
BY CONTRAST, Cornyn could go look at the climate models and ask whether the underlying phenomenon being modeled is exponential. If it is, he could understand that those predictions, too, likely carry substantial uncertainty.