The biggest risk to our future: How measurable benefits justify solutions with non-measurable (or unknown unknown) harms. In particular, well-meaning interventions by well-funded, state-backed, politically-driven organizations with a tendency to double down in order to save face.
Goodhart's Law says that the measurable (e.g. revenue) crowds out the non-measurable (e.g. reputation), causing tremendous waste. Here's its far more evil twin: blinded by pretty charts, we mess with complex systems, ignoring deniable harms, desperate to show we're doing something.
When the voice that tells you you're doing right is backed by easy metrics, and the voice that tells you you're doing far more harm is backed by first-principles inference, it's bad. Add political survival instincts, and there's no return until the harms are measurable, at scale.
Some examples: the FDA makes drug approval incredibly expensive while also being overcautious with the drugs it does approve, and adds a multi-layered veto process on top. The result is multiple layers of compounding costs (in human lives) we never see. https://marginalrevolution.com/marginalrevolution/2015/08/is-the-fda-too-conservative-or-too-aggressive.html
By the way, even if the billions in costs weren't preventing anyone from developing drugs, and even if the process were a flawless utilitarian calculation, the time it takes alone has exponential costs. See: the cost of delaying COVID vaccinations by almost a year. Including variants.
What about interventions to help developing countries? The jury is still out, but have a look at "Harmful aid projects" on this page and scale it to every external intervention onto a struggling (but functioning) society: https://www.givewell.org/international/technical/criteria/impact/failure-stories
How about "humanitarian wars"? I don't know what the latest estimates are for whether the interventions in Iraq, Afghanistan, and Libya were ultimately a Good Thing (tm) or not. But if it's still not clear two decades later, it can't have been clear ahead of time, can it?
The effects of the welfare state on the growing class of lifelong recipients are, at the very least, understudied. It's easy to see the payments as helping in the moment. It's hard to see what a lifelong message of dependence does to the human psyche. https://www.washingtonpost.com/opinions/george-will-the-harm-incurred-by-a-mushrooming-welfare-state/2015/01/21/d8cd15ae-a0d2-11e4-903f-9f2faf7cd9fe_story.html
How about the situations where humans attempt to violently modify an ecosystem, either by introducing a new species or by removing one that is there? The results are as scary as they are predictable: https://eandt.theiet.org/content/articles/2018/05/top-10-invasive-species-when-pest-control-goes-wrong/
What about the application of European farming methods on the "backwards natives" in almost every other part of the world? Besides the history of famine and collapse, it seems we're now realizing that there was something to learn there too. https://aeon.co/essays/what-bankers-should-learn-from-the-traditions-of-pastoralism
What do these examples have in common? It's humans in charge messing with complex systems while using linear thinking. In all these cases, the more distant and the more powerful the decision-maker, the greater the risk of catastrophe.
Government agencies in charge of safety in any domain, let's go with the FDA, by definition have to make this error not just once, but as an overarching principle of operation, applied at scale. The costs are staggering.
Sadly, things are likely to get worse. If the ability to make decisions for remote systems is a risk factor, technology makes it worse. And if the ability to intervene ever deeper compounds the risk, science makes it worse.
We should have started educating everyone on the dynamics of complex systems 20 years ago to be in a good place today. Instead, we now face mounting global challenges, and the best our leaders can do is apply reactive "quick fixes" with ever more power, so long as they get to CYA.
The only alternative that doesn't end in self-destruction, or at the very least in human population collapse, is to develop coordination and decision-making mechanisms that consider systems as wholes, and that rely on constant feedback and adjustment.
I truly hope our vaccines do the trick, but seeing the conversation between @BretWeinstein and @GVDBossche has unsettled me. I have no way to rule out us missing something this basic. But even if we make it out of this, our habit of playing Russian roulette can't end well.
This is why I am hoping to see something like #gameb produce practical proposals. Unless we can make our species anti-fragile, scalable, and omni-win-win, the only alternative is ruin. I hope we get there in time, before a meddling bureaucrat has that one final bright idea.