I do have double standards, and it doesn’t bother me 🤔

One standard for people I think are acting in good faith, one for people I think are acting in bad faith.

I’m not a court of law, or otherwise formally behind a veil-of-ignorance. I don’t owe the world a single standard.
A single standard is a peculiar thing. It’s a feature of due processes that are purposely designed to be blind to context in specific ways for specific reasons.

Context is a huge bucket of illegible things when judging good/bad faith — identity, trust, intentions, circumstances.
If every situation involves a thousand variables, a reasonable standard can only account for a dozen, perhaps. When you want to avoid worst-case outcomes like innocent people being punished, you design a single standard around a principle like reasonable doubt/presumption of innocence.
So single standards typically emerge when you want to allocate the benefit of the doubt over illegible context in an extreme way. Always in favor, or always against.

You always presume innocence in a court of law.

You always presume a Cape Buffalo will be an asshole and attack you.
If a “fair” due process, a flowchart say, can only accommodate a dozen variables in a thousand (including blindness-by-design to some), what about the other 988?

You can’t disregard them outside of formally closed contexts like legal discovery/admission into evidence.
The most basic thing you can do with 988 variables is pattern recognition. Gut feelings/intuition/System 1. And the most basic kind of pattern recognition is classification into friendly and hostile. That’s the most natural kind of double standard. One for friends, one for threats.
Obviously this can go badly wrong very easily. We’re primed to map this standard to family vs not. My tribe vs yours. Generalized ingroup vs outgroup. These are what I call uncritical double standards. Ones you adopt unconsciously, often via imitation of authority figures.
But the dangers of uncritical double standards should not drive you to the opposite end of the spectrum: clueless single standards that involve self-blinding to illegible context out of a perverse desire to be “consistent” like you’re a court. You’re not a court.
A *critical* double standard is two things:

a) A clear but illegible in/out sorting function. Legibility of a gut instinct is a clear sign of uncritical prejudice.

b) Reserving the right to choose when you’ll make an effort to explain yourself and when you won’t bother.
On Twitter for example, most people have a gut sense of concern trolling even if they haven’t heard the term. You know when someone is doing it (maliciously or unconsciously). So unless you’re dumb, you engage at your discretion and don’t explain yourself when you don’t.
Yes there are classification errors. Sometimes you apply good-faith rules of engagement to bad-faith people and vice-versa. Correct the error, use it to become more mindful of your critical pattern matching, move on. Don’t agonize over it.
The only real discipline you need is a sense of your own power to hurt others. Outside of institutional due process contexts, it is actually really hard to hurt someone by simply refusing to deal with them. If your “bad faith standard” is disengagement, you’re probably fine.
If your bad-faith standard is active (like open hostility, or quietly working to get someone blacklisted or declared persona non grata on some social graph) then you have to police it harder. But it’s okay to have standards for picking actual conflicts.
The idea of explainable or justifiable decisions is kinda dumb outside closed contexts. Which is why the explainable AI conversation is both interesting and tedious. When you want to use AI for due process contexts, it’s an interesting challenge to “blind” it to some things.
Blinding is not explainability. You could blind a hiring algorithm to gender, say, by doing statistical testing and removing inputs that provide a gender hint. That still won’t mean decisions are explainable. They’ll simply be demonstrably statistically agnostic to some variable.
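
A minimal sketch of that kind of blinding, assuming a pandas DataFrame with hypothetical column names, and a simple correlation screen standing in for whatever statistical test you’d actually run:

```python
import numpy as np
import pandas as pd

def blind_features(df: pd.DataFrame, protected: str, threshold: float = 0.1) -> pd.DataFrame:
    """Drop numeric columns whose correlation with `protected` exceeds
    `threshold`, then drop `protected` itself. The surviving inputs are
    demonstrably statistically agnostic to the protected variable --
    but nothing about downstream decisions becomes explainable."""
    y = df[protected].astype(float)
    leaky = [protected]
    for col in df.columns:
        if col == protected:
            continue
        x = pd.to_numeric(df[col], errors="coerce")
        if x.notna().sum() < 2:
            continue  # non-numeric column; a real screen would handle these too
        r = np.corrcoef(x.fillna(x.mean()), y)[0, 1]
        if abs(r) > threshold:
            leaky.append(col)
    return df.drop(columns=leaky)

# Hypothetical hiring data: height leaks gender and gets removed.
df = pd.DataFrame({
    "gender":    [0, 1, 0, 1, 0, 1],
    "height_cm": [178, 164, 181, 160, 175, 162],
    "years_exp": [5, 5, 4, 4, 6, 6],
})
print(blind_features(df, "gender").columns.tolist())  # ['years_exp']
```

Note the screen only certifies agnosticism to one variable. It says nothing about why any individual decision comes out the way it does.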
Demanding a clear logical account of a decision is silly for almost everything. You shouldn’t expect it of either humans or machines most of the time.

A decision is 3 things: input blinding, intuitive classification into regimes, and application of regime-specific standards.
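
A toy sketch of that decomposition, with hypothetical names throughout; the classify step stands in for what would really be illegible System 1 pattern matching:

```python
from typing import Callable

def decide(
    situation: dict,
    blind: Callable[[dict], dict],                 # 1. input blinding
    classify: Callable[[dict], str],               # 2. intuitive classification into regimes
    standards: dict[str, Callable[[dict], str]],   # 3. regime-specific standards
) -> str:
    visible = blind(situation)          # discard what you've chosen not to see
    regime = classify(visible)          # gut-level sort: friendly or hostile?
    return standards[regime](visible)   # apply that regime's standard

# Toy usage: a good-faith/bad-faith split, with disengagement as the
# bad-faith standard, per the thread.
result = decide(
    {"history": "constructive", "tone": "curious", "follower_count": 12},
    blind=lambda s: {k: v for k, v in s.items() if k != "follower_count"},
    classify=lambda s: "good_faith" if s["tone"] == "curious" else "bad_faith",
    standards={
        "good_faith": lambda s: "engage and explain",
        "bad_faith": lambda s: "disengage, no explanation owed",
    },
)
print(result)  # engage and explain
```

Only the blinding stage is auditable in the statistical sense above. The classifier is exactly the part that stays illegible when the double standard is a critical one.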