There is a common idiocy going on with the #COVID19 narratives that resurfaces every few days, whereby 2-3 day decreases or dips in case or death counts are interpreted as 'past the peak' signs and extrapolated into the symmetric 'bell-shaped curve' expectation of future declines.

So far, I am not seeing it. Around the peak in a number of EU countries (earlier onset of counts, so longer data), there is some volatility and downside, but no 'exponential' decline. In other words, there is no sharp fall-off. This #COVID19 pandemic ain't going away. At least not at the rapid rate implied by all the cheerful assertions that, once a country is 'past the peak', there is (1) no possibility of a second mode in the distribution, & (2) further declines will be robust. This means that questions about a potential resurgence of cases following a significant moderation remain open. It also means that there is no 'magic' two-week threshold.
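To make this concrete, here is a toy simulation (not fitted to any country's data; the curve shape and noise parameters are entirely made up): a right-skewed epidemic curve with reporting noise throws off multi-day 'dips' well before the true peak, and its decline side is far flatter than its rise.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# Gamma-shaped curve: fast rise, long slow decay -- deliberately NOT a symmetric bell.
true_daily = 2000 * (t / 20.0) ** 3 * np.exp(-t / 20.0)
reported = rng.poisson(true_daily)          # day-to-day reporting noise

peak_day = int(np.argmax(true_daily))       # day 60 for these invented parameters

# Count 2-day-in-a-row declines in REPORTED counts that happen BEFORE the true peak.
pre_peak_dips = sum(
    reported[d] < reported[d - 1] < reported[d - 2]
    for d in range(20, peak_day)            # skip the tiny-count early days
)
print(f"true peak at day {peak_day}; 2-day declines before it: {pre_peak_dips}")

# Asymmetry check: the curve takes far longer to fall back to half-peak than to rise.
half = 0.5 * true_daily.max()
rise = peak_day - int(np.argmax(true_daily > half))
fall = int(np.argmax((t > peak_day) & (true_daily < half))) - peak_day
print(f"half-peak to peak: {rise} days up, {fall} days back down")
```

Re-run with different seeds: the dips move around, the asymmetry doesn't.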
An added issue is that we are now getting lagged uplifts in many emerging-market countries, where the data is highly volatile, most likely due to a lack of available tests. Globally, this means the pandemic numbers are becoming less reliable, not more. Again, this raises big issues with the narrative that pandemic-related measures can be lifted any time soon.
Now, to pre-empt any complaints: my view is NOT epidemiological. It is simply based on observation of the reported numbers and is solely informed by data dynamics. My concern is with a simplistic over-reliance on 'fitted models' that are theoretically plausible but empirically error-rich. Scientists use these models with caution & reservations. Media & politicians use them to cherry-pick desired outturns. There is a fundamental difference between these two approaches. That said, I have now seen serious epidemiologists using 5-10 days' worth of data to fit multivariate dynamic equations. The accuracy of this type of exercise, whether panel- or time-series-based (panel cross-sections are dodgy as hell here), is NIL. It might make for great intellectual use of Stata or Python, but informationally, it is tenuous.
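Here is what 'NIL accuracy' looks like in a toy exercise (all numbers invented): fit a 3-parameter curve to 7 days of noisy counts and watch the implied 'peak date' swing wildly across noise draws. With n = 7 observations and k = 3 parameters, you have 4 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(7)
true_counts = 500 * np.exp(0.15 * days)    # the truth: still genuinely growing

peak_estimates = []
for _ in range(200):
    noisy = rng.poisson(true_counts)
    # Quadratic in log-space == fitting a Gaussian ('bell curve') to the counts.
    b2, b1, b0 = np.polyfit(days, np.log(noisy), 2)
    if b2 < 0:                             # concave fit implies a peak at -b1/(2*b2)
        peak_estimates.append(-b1 / (2 * b2))

peak_estimates = np.array(peak_estimates)
print(f"fits implying any peak at all: {len(peak_estimates)} / 200")
print(f"implied peak day: median {np.median(peak_estimates):.1f}, "
      f"5th-95th percentile {np.percentile(peak_estimates, 5):.1f}"
      f" to {np.percentile(peak_estimates, 95):.1f}")
```

The underlying series has no peak at all, yet roughly half the fits confidently produce one, scattered across a huge range of dates.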
I recently asked one respected epidemiologist for the specifics of his data modeling in exactly this context. I got zilch in response. Not even a 'get back to you'. If I were looking at his work as a referee, I would not even bother recommending major revisions.
Again, the problem is not with the scientists, but with the way their exercises in data analysis are being used by the media and political decision-makers. Neither the media nor the politicians comprehend what the hell degrees of freedom, say, or threshold effects, etc., have to do with understanding data. So they are free to choose whatever it is they want to report.

Cases of absurd bullshit abound. Some U.S. Governors think that past-the-peak is equivalent to safety. A bunch of 'analysts' think that a very slow early ramp-up in cases in, say, Russia is evidence of the 'Government hiding cases'. Other idiots think that once 'EU countries' are past the threshold of double-digit new cases, the 'borders can be opened'. More think that domestic, not global, cases matter more. People believe that some countries have 'solved' the crisis because they have been showing slower growth in infection rates (as if this lengthening of the tail somehow automatically implies a lower peak; hint: it only does so if you assume the same total number of cases and no re-infections).
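The hint above is just conservation of the area under the curve. A toy numerical check (curve shapes and totals entirely invented): stretching the time axis lowers the peak ONLY if the total case count is held fixed.

```python
import numpy as np

t = np.arange(200)

def daily_curve(scale, total):
    """Gamma-shaped daily-case curve, normalized to sum to `total` cases."""
    f = (t / scale) ** 2 * np.exp(-t / scale)
    return total * f / f.sum()

fast            = daily_curve(scale=15, total=100_000)
slow_same_total = daily_curve(scale=30, total=100_000)   # longer tail, SAME total
slow_more_cases = daily_curve(scale=30, total=220_000)   # longer tail, MORE cases

print(f"fast outbreak peak:        {fast.max():7.0f}/day")
print(f"slower, same total:        {slow_same_total.max():7.0f}/day  (lower peak)")
print(f"slower, 2.2x the cases:    {slow_more_cases.max():7.0f}/day  (peak NOT lower)")
```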
Now, in reality, the data and the application of the precautionary principle in crisis management suggest that we will NOT go 'back to normal' until we have a widely available (including in developing countries; good luck with that), effective vaccine with a strong enough immune response to prevent re-infections. The former will require weeks of development and the latter will need months of testing. And then, sit tight and pray that in the meantime the virus won't mutate...
Here we go, folks. Germany, hardly a country with dodgy stats or governance, for a good part of the week reported '0' cases and '0' deaths in one day. Today, it retrospectively revised the numbers by several thousand. If you took a moving average across that '0', you had a 'welcome' and 'robust' decline in cases and deaths. Time to write an op-ed for some hungry-for-news outfit about how 'containment is working' or 'soon we will see a recovery' or whatever it is you want to write into these numbers.

Statistics is a tool. It can be a dumb bludgeoning one, or a fine Stradivarius. It depends on the hands using it. No amount of 'smart' visualizations, data fittings, fancy assumptions and complex models will be able to get you candy out of garbage data. And we have garbage data, folks. Data that excludes deaths outside specific settings (e.g. in nursing homes & households). Data that can be mis-reported. Data that is sensitive to the day of the week (weekends, anyone? have you morons tested for these?). Data that is censored by the availability of test kits. Data that is lagged due to test lags. And on... and on... and on. You take a Lambo and pour in diesel instead of petrol. Good luck outrunning a Yugo off the streetlight.
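Both artefacts are trivially easy to reproduce (a toy sketch; all numbers invented, none of this is Germany's actual data): a weekend reporting dip plus a mis-reported '0' day later revised back in as one lump, and what a naive 7-day moving average does with them.

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(42)
base = rng.poisson(1000, size=42).astype(float)
base[days % 7 >= 5] *= 0.6                 # invented 40% weekend under-reporting

reported = base.copy()
reported[30] = 0                            # the 'Germany' day: nothing reported...
reported[33] = base[33] + base[30]          # ...revised back in as one lump later

ma7 = np.convolve(reported, np.ones(7) / 7, mode="valid")
print(f"7-day MA dips to {ma7[24:31].min():.0f} around the '0' day "
      f"vs an average daily level of {base.mean():.0f}")

# The day-of-week check: weekday vs weekend reported means.
weekday_mean = reported[days % 7 < 5].mean()
weekend_mean = reported[days % 7 >= 5].mean()
print(f"weekday mean {weekday_mean:.0f} vs weekend mean {weekend_mean:.0f}")
```

The moving average manufactures a 'robust decline' out of a clerical artefact, and the weekday/weekend gap is visible from a one-line group comparison nobody bothers to run.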
Just to end this (and I could go on): the main point is that we need transparent, quality data with reasonable data-collection methodologies FIRST. We then need a long enough time dimension to do econometrics/statistics on this data SECOND. THIRD, we need caution in interpreting the results. FOURTH, and last, we must apply an extreme precautionary principle in deriving policy responses based on this data. And a caveat: ALL of this applies to epidemiologists and all other PhDs, not to mention the media.
You can follow @GTCost.