For a @themarkup feature, @leonyin and @asankin compiled a list of "social and racial justice terms" with help from @ColorOfChange, @mediajustice, @conmijente and @muslimadvocates, then checked if @YouTube would let them target those terms for ads.

https://themarkup.org/google-the-giant/2021/04/09/google-blocks-advertisers-from-targeting-black-lives-matter-youtube-videos

1/
The results are (initially, at least) quite shocking: Youtube bans advertisers from targeting videos using keywords like #BlackLivesMatter, Black power, reparations, colonialism, antifascist, American Muslim, and sex work.

2/
Even worse: when the reporters asked Youtube for comment on these blocks, the company stonewalled them, and then added even more terms to the blocklist, including Black excellence, LGBTQ, antiracism, civil rights...

3/
As if that wasn't enough, there's the list of terms that Youtube DOES allow ad-targeting on, including white power, white lives matter, etc.

5/
The contradictions go further: you can advertise to "Christian parenting" and "Jewish parenting" but not "Muslim parenting." Racist terms like "white sharia" and "civilizational jihad" are in, too.

6/
After Youtube was asked for comment, they started blocking "Christian" and "Jewish" as prefixes on the same keywords that were blocked when associated with "Muslim."

7/
Youtube's policies offer two explanations for this. The first ("[ads should] reflect a user’s interests rather than more personal interpretations of their fundamental identity") is thoroughly unconvincing. It's literally nonsense.

8/
The second, though ("[targeting categories could be] used to stigmatize an individual") is both hugely revealing and hugely incomplete, and therein lies the tale.

https://support.google.com/adspolicy/answer/143465?hl=en#zippy=%2Ctroubleshooter-sexual-orientation-in-personalized-advertising

9/
On the other hand, you have the platform's utility to reactionary, racist, genocidal and eugenic communities who are totally in opposition to Youtube's claimed support for racial justice.

11/
Some of that is unwitting - the company can't possibly know what's in all the videos published on its platform - and some is deliberate: Youtube doesn't want to face the reputational, political and financial consequences of cutting off superstars like PragerU.

12/
They know that if they allow advertisers to target "Black Lives Matter," some of those ads will show up alongside PragerU's racist video, "'Black Lives Matter' Is Not Helping Blacks."

13/
That's the heart of the contradiction. Sometimes, Youtube wants us to think of its self-serve, algorithmic ad/publishing system as untouched by human hands, an interplay of pure math, initiated and steered by third parties whose choices are not Youtube's responsibility.

14/
Other times, Youtube wants us to think of it as a corporate person, with identities and values, priorities and ethics. The selective demand that Youtube be considered a moral actor - but only for the outcomes that reflect well on the company - leads to this contradiction.

15/
To be clear, I don't think there's any way Youtube COULD operate a self-serve ad platform or a self-serve video program that could proactively identify racist outcomes.

16/
It's not enough to vet every ad to make sure it's not racist - they'd also have to vet every possible ad PLACEMENT and make sure it doesn't violate their ethics; that is, they'd have to use reliable human judgment to evaluate every single combination of ad and video.

17/
There isn't enough human judgment - let alone sound human judgment - in existence to cover that combinatorial explosion. What's more, Youtube is so consequential to our discourse that its errors would be - and are - hugely consequential as well.

18/
That's why all this matters: Youtube's editorial choice has the foreseeable (and, evidently, acceptable to Youtube) outcome of producing an economic boycott of the creators it says it wants to uplift and support.

19/
Youtube's monopolistic dominance has the effect of making its contradictions matters of civilizational importance.

It wants to be:

* Imperfect

* Moral

* Neutral

* Dominant

and

* Forgiven

It can't have all of those. It just can't.

20/
And to be perfectly honest, I don't know what I want it to do here. I mean, it could stop spinning idiotic tales about "[ads that] reflect a user’s interests rather than more personal interpretations of their fundamental identity," but that wouldn't fix things.

21/
Likewise, it could ban the words "white" and "Christian" in association with all the same keywords it blocks in connection with "Black" and "Muslim," producing a kind of evenhanded idiocy, which is preferable to a biased idiocy.

22/
And it could be more transparent in its "brand safety" tactics, and have some process for appealing bad choices, as @nandoodles - who cofounded Check My Ads - sensibly calls for. They should do this, but it still would leave the contradiction - and its consequences - intact.

23/
Thinking about this stuff gives me a headache. On the other hand, it reminded me to order a copy of SILICON VALUES, the new book from my @EFF colleague @jilliancyork, who is far and away the content moderation expert I trust most in this world.

http://siliconvaluesbook.com 

24/
If you'd like an unrolled version of this thread to read or share, here's a link to it on http://pluralistic.net , my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2021/04/10/brand-safety-rupture/#brand-safety

Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg

CC BY:
https://creativecommons.org/licenses/by/3.0/deed.en

eof/