The biggest risk to our future: how measurable benefits are used to justify solutions with non-measurable (or unknown-unknown) harms. In particular, well-meaning interventions by well-funded, state-backed, politically driven organizations with a tendency to double down in order to save face.
Goodhart's Law ("when a measure becomes a target, it ceases to be a good measure") implies that the measurable (e.g. revenue) crowds out the non-measurable (e.g. reputation), causing tremendous waste. Here's its far more evil twin: blinded by pretty charts, we mess with complex systems, ignoring deniable harms, desperate to show we're doing something.
When the voice telling you you're doing right is backed by easy metrics, and the voice telling you you're doing far more harm is backed by first-principles inference, that's bad. Add political survival instincts, and there's no reversing course until the harms become measurable, at scale.
By the way, even if the billions in costs weren't preventing anyone from developing drugs, and even if the process were a flawless utilitarian calculation, the sheer time it takes imposes exponential costs. See: the cost of delaying COVID vaccinations by almost a year, variants included.
How about "humanitarian wars"? I don't know what the latest estimates are for whether the interventions in Iraq, Afghanistan, and Libya were ultimately a Good Thing (tm) or not. But the fact that it's still not clear two decades later means it couldn't have been clear ahead of time, could it?
What about the imposition of European farming methods on the "backwards natives" in almost every other part of the world? Beyond the resulting history of famine and collapse, it seems we're now realizing there was something to learn there too. https://aeon.co/essays/what-bankers-should-learn-from-the-traditions-of-pastoralism
What do these examples have in common? Humans in charge, messing with complex systems using linear thinking. In these cases, the more distant and the more powerful the decision-maker, the greater the risk of catastrophe.
Government agencies in charge of safety in any domain (take the FDA) must, by definition, make this error not just once but as an overarching principle of operation, applied at scale. The costs are staggering.
Sadly, things are likely to get worse. If the ability to make decisions for remote systems is a risk factor, technology makes it worse. And if the ability to intervene ever deeper compounds the risk, science makes it worse.
We should have started educating everyone on the dynamics of complex systems 20 years ago to be in a good place today. Instead, we face mounting global challenges, and the best our leaders can do is apply reactive "quick fixes" with ever more power, so long as they get to CYA.
The only alternative that doesn't end in self-destruction, or at the very least human population collapse, is to develop coordination and decision-making mechanisms that consider systems as wholes and rely on constant feedback and adjustment.
I truly hope our vaccines do the trick, but seeing the conversation between @BretWeinstein and @GVDBossche has unsettled me. I have no way to rule out that we're missing something this basic. But even if we make it out of this, our habit of playing Russian roulette can't end well.
This is why I am hoping to see something like #gameb produce practical proposals. Unless we can make our species antifragile, scalable, and omni-win-win, the only alternative is ruin. I hope we get there in time, before a meddling bureaucrat has that one final bright idea.
You can follow @alexandrosM.