Our ruling class has fully embraced the idea that information is hazardous. In fact, they have decided it is so dangerous that they now advise you to stop thinking altogether. The WHO advises that you wear a mask on your brain.
An information hazard is a piece of true information that causes some harm to the person who learns it. Bostrom identifies six types of hazardous information transfer: data, ideas, templates, signals, attention, and evocations.
A data hazard is a specific, empirical piece of information that poses a risk. For example, the exact genetic sequence of a deadly virus, or a schematic of a nuclear bomb
An idea hazard is a data hazard without the specific data: the general idea alone is enough to do damage. Communism is an idea hazard, because if you implement it, it destroys your economy and causes famines and holiness spirals.
A template hazard is a bad example that you decide to follow. e.g. China instituting a full lockdown was a template hazard for the rest of the world, who followed their example and rekt their economies
A signaling hazard is when knowing a piece of information sends a signal that causes others to treat you badly. Crime statistics and IQ research are signaling hazards.
An attention hazard is when a true but relatively unimportant piece of information distracts you from another, critical piece of information. Most of twitter is an attention hazard (everything on twitter is true)
An evocation hazard is when a true piece of information is presented in such a way that causes psychological harm. A video of starving 3rd world children is an evocation hazard
In addition to the typology by mode of information transfer, we can also classify infohazards by the type of risk they present: adversarial risk, market risk, error risk, psychological risk, information system risk, and development risk.
Adversarial and market risks are the types of infohazards identified by Schelling in "An Essay on Bargaining," summarized by Kevin Simler and Robin Hanson in The Elephant in the Brain https://twitter.com/0x49fa98/status/1346115108154593281
Information system risk is when data causes harm because it precipitates a dangerous state in a computer system (e.g., a robot, an AI, or even an email program). This could be as innocuous as a UI bug or as imposing as an automated MAD nuclear missile launcher.
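To make this concrete, here is a toy sketch in Python (my own construction, not from the thread, and the "mail filter" is hypothetical): the same string is inert as data, but becomes hazardous the moment a system treats it as code.

# Hypothetical "smart" mail filter that evaluates rule strings taken
# from untrusted input: the data itself precipitates the dangerous state.
SAFE_RULES = {"spam": lambda msg: "viagra" in msg.lower()}

def naive_filter(msg: str, rule: str) -> bool:
    # DANGEROUS: treats attacker-controlled data as a program
    return bool(eval(rule, {}, {"msg": msg}))

def safer_filter(msg: str, rule: str) -> bool:
    # Safer: the data selects trusted behavior instead of becoming code
    check = SAFE_RULES.get(rule)
    return check(msg) if check else False

print(naive_filter("hello", "len(msg) > 3"))   # True -- and arbitrary code would run just as happily
print(safer_filter("cheap viagra!!", "spam"))  # True, without ever executing the data

The point generalizes: a system is "harmed" by information exactly when its architecture lets data leak into the control path.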
Development risk is when information could lead to the production of a new technology that poses an existential risk. Nanotechnology is an infohazard with respect to gray goo; Neuralink is a development risk with respect to wireheading as the great filter.
Psychological risk is when your reaction to true information harms your ability to enact your intentions. Disappointment, embarrassment, and loss of motivation or "mindset" are forms of psych risk. Learning biographical details about your friends can destroy social capital
When we evolved the power to understand and propagate arbitrary information, we became vulnerable to information hazards. This was a novel danger; only humans can be hurt by true information.
Finally, error risk is when true information can cause an agent to make dangerous mistakes. Bostrom also includes neuropsychological hazards such as the one in the story BLIT, an image that exploits the architecture of the human brain, killing anyone who sees it https://en.wikipedia.org/wiki/BLIT_(short_story)
Neuropsychological hazards are fun (my favorite), but the more common type of error risk comes from ideologies and biases. Ideology risk: if you believe there are no psychological differences between men and women, you will have trouble dealing with sexual contingencies.
True information can also skew your perceptions. If you think there's an epidemic of violence against a particular group, then an anecdote of violence against a member of that group will confirm your bias, regardless of ground truth.
Of course, there is a problem with framing ideology and epistemic risk in terms of bias. All thinking is biased. To be more precise, all reasoning is motivated. https://twitter.com/0x49fa98/status/1352661369024339971?s=20
We can build AIs today that are capable of thinking, but no one would accuse them of reasoning. This is not because they lack intelligence, but because they lack motivation.
An AI was given an intrinsic reward for novelty, which made it capable of solving a maze, but when a screen of random images was placed in its world, it encountered the distinctly human problem of procrastinating in front of a TV.

A cynical man might suggest that our motivation mechanisms are also this crude https://ai.googleblog.com/2018/10/curiosity-and-procrastination-in.html
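To see how crude a novelty drive can be, here is a minimal sketch of that noisy-TV failure mode in Python (my own toy construction; the agents in the post learn prediction models over pixels, not counts). Intrinsic reward is the prediction error of a count-based observation model, so deterministic rooms become boring while a screen of static never does.

import random
from collections import defaultdict

STATES = ["corridor", "junction", "exit", "tv"]

def observe(state):
    # the 'tv' emits unpredictable static; every other state is deterministic
    return random.randint(0, 9) if state == "tv" else len(state) % 10

def prediction_error(state, obs):
    total = sum(counts[state].values())
    return 1.0 if total == 0 else 1.0 - counts[state][obs] / total

counts = {s: defaultdict(int) for s in STATES}  # observation counts per state
novelty = {s: 1.0 for s in STATES}              # running estimate of prediction error
visits = defaultdict(int)

for step in range(5000):
    # epsilon-greedy "curiosity": go where prediction error seems highest
    state = random.choice(STATES) if random.random() < 0.1 else max(STATES, key=novelty.get)
    obs = observe(state)
    novelty[state] = 0.9 * novelty[state] + 0.1 * prediction_error(state, obs)
    counts[state][obs] += 1
    visits[state] += 1

print(dict(visits))  # 'tv' dominates: its prediction error never decays

The deterministic rooms become perfectly predictable after a visit or two, so their reward falls to zero; the static never does, and the novelty-seeker parks itself in front of the TV.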
Biasing hazards are nevertheless real risks, but bias must be framed in relative terms. The implicit assumption of "eliminating bias" is that there is some pristine unbiased view of the world that we more or less know, and we just have to train people into recognizing it.
In most cases, biasing hazard is less about things that pull you from the truth, and more about things that pull you from the center of social consensus. This is rule zero of power, which you've heard many times before. https://twitter.com/0x49fa98/status/1281386347035611136?s=20
In light of this, it's fun to notice that we consistently underestimate the risks posed by infohazards. There's a popular posture that we often take, as if we are so cold and realistic, no truth can harm us, that we will gaze into the void and master it...
but in fact the void often wins staring contests. We are self-deceiving creatures; this is another point I have often brought you. And self-deception can be socially useful, but we can also see it as an evolved defense against infohazards https://twitter.com/0x49fa98/status/1023551251450028035
Infohazards are so pervasive and dangerous that we have evolved a defense against them in the form of strategic epistemic failure.
It's hard to believe things that go against social consensus, when you know they are signaling hazards. If someone presents you with an ironclad case for a socially dangerous idea, you will tend to doubt it or dismiss it. You might agree one moment, and forget the next.
For example, with Gell-Mann amnesia you forget a disturbing observation the moment it leaves your field of attention. How many other jarring revelations don't stick in your head this way? You'll never know https://twitter.com/0x49fa98/status/1027597626504400896
My favorite example is this study, which shows we don't believe scientific studies that come to negative conclusions about women.

If this finding doesn't match your predilections, you will also surely dismiss it https://twitter.com/0x49fa98/status/1157667886644600832
And why not? The truth is that all persuasion is grounded, not in reason, but in strength. When a logical argument convinces you of something, it's not the mechanics of reason that does it; it's the display of power that reason presents.
A skilled speaker not only projects power, but offers you power. "Think as I do, and you can wield some of my power." All persuasion is seduction. If the truth smells of weakness, most of us follow our nose.

If the news is no longer convincing, that suggests a loss of power.