Imagine an FDA that did two things:
1) Verify that goods actually contain all and only what the labels say
2) Make recommendations ranging from "knock yourself out" to "you will definitely die if you take this", or even "we just don't know"
FDA is not this way. Why not?
The usual reply is that the FDA is incentivized to avoid high-profile catastrophes like thalidomide. Patients who die for lack of a cure are not a natural, identifiable constituency in the way that victims of adverse reactions or quackery are.
Not only do these incentives lead to far less than the socially optimal level of drug risk, but, maybe even more perversely, they encourage manufacturers to obfuscate the desirable qualities of their own products for fear of making illegal claims.
Thus the wink-nudging about hand sanitizer and mask marketing that you would have to be on about seven levels of DC kremlinology in order to understand.
Like imagine being a normie just trying to buy some Purell, and having to decipher: "WARNING: THIS PRODUCT CONTAINS 70% ETHYL ALCOHOL BUT DEFINITELY DOESN'T DO ANYTHING, OK??"
Looking at the history of the org, it seems like the FDA started out with a pretty reasonable mission: going after stuff that's contaminated with a known poison or pathogen that nobody wants. That soon extends to banning quack cures, and 100 years later it's fighting masks during a pandemic.
How do you give FDA better incentives?
The hunch I keep coming back to is Information Elicitation Mechanisms: protocols that squeeze the truth out of people.
Information elicitation mechanisms are all around you in various guises. Some examples:
- Betting
- Error bounties
- Auctions
- Prediction markets
- Cut and choose algorithms
- Separating suspects before questioning them
I don't think we have a complete theory of this yet.
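Cut and choose is maybe the easiest of these to make concrete, because the protocol itself, not anyone's honesty, does the work: one party splits, the other picks first, so a selfish splitter is forced to divide fairly. A minimal sketch (the functions and numbers below are made-up illustrations, not anything from a real mechanism):

```python
def cut_and_choose(cake, cutter_split, chooser_prefers):
    """One round of cut-and-choose over a divisible resource.

    cutter_split: how the cutter divides the cake into two pieces.
    chooser_prefers: chooser's comparison; they take the piece they value more.
    Returns (chooser's piece, cutter's piece).
    """
    piece_a, piece_b = cutter_split(cake)
    if chooser_prefers(piece_a, piece_b):
        return piece_a, piece_b
    return piece_b, piece_a

# An honest 50/50 cut leaves the cutter with half no matter what:
chooser_share, cutter_share = cut_and_choose(
    cake=1.0,
    cutter_split=lambda c: (c / 2, c / 2),
    chooser_prefers=lambda a, b: a >= b,
)

# A greedy lopsided cut backfires: the chooser grabs the big piece,
# so the cutter ends up with less than half.
greedy_chooser, greedy_cutter = cut_and_choose(
    cake=1.0,
    cutter_split=lambda c: (0.7 * c, 0.3 * c),
    chooser_prefers=lambda a, b: a >= b,
)
```

The point generalizes: a well-designed mechanism makes truthfulness (here, an even cut) the selfish best response, rather than relying on trust.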
The thing people want out of FDA is a trustworthy answer to the question: "is it safe to take this [relative to my other options]?"
And the solution we've settled upon is for FDA to make it illegal to take certain things, including almost all new things.
If you look closely you will realize that that's *not even an answer to the question at all*
An actual answer would be: "yes / no / maybe, here's a summary of the evidence on this question, and here's our public track record of past advice. Here is our quote for insurance against an adverse reaction from taking this."
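The "public track record" part is the kind of thing a proper scoring rule makes auditable: if the agency publishes probability estimates up front, anyone can score them against outcomes later, and honest probabilities are the score-maximizing report. A minimal Brier-score sketch (the forecasts and outcomes below are invented purely for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: a perfect forecaster scores 0, and reporting your
    true beliefs minimizes your expected score (the "proper" property).
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical published estimates of P(product is safe), scored
# against what was eventually observed (1 = safe, 0 = adverse).
forecasts = [0.9, 0.8, 0.2, 0.6]
outcomes = [1, 1, 0, 1]
score = brier_score(forecasts, outcomes)
```

A standing, scoreable record like this is what would distinguish an info-eliciting agency from one that just issues bans.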
In short, a real FDA would be info-eliciting.
FDA would still be provisioning a public good (knowledge about consumer products), and that costs money. The difference is how we discipline FDA to *actually prove* that it has found out stuff and relayed those results accurately, without the obvious incentive distortions.