"If FB has a dial that can turn hateful content down, why doesn't it turn it down all the time?" is a good and important question. The answer is exactly the precision recall tradeoff https://en.wikipedia.org/wiki/Receiver_operating_characteristic
You can catch all hate speech by deleting every post on Facebook, but you'll have a lot of false positives. You can eliminate all false positives by never deleting a post, but you'll miss all the hate speech. Facebook has to choose a point along that continuum.
Turning down the hateful-content knob is exactly the same as turning up the false-positives knob. And depending on where you are on that curve, it can be a bad bargain: the number of false positives created might be a lot higher than the number of additional hateful posts detected.
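A toy sketch of the tradeoff (this is my illustration, not Facebook's actual system): suppose a classifier assigns each post a hate-score in [0, 1] and moderation removes anything above a threshold. The scores and labels below are made up. Sweeping the threshold is the "knob" — lowering it catches more hate speech (higher recall) at the cost of more false positives (lower precision), and vice versa.

```python
def precision_recall(scores, labels, threshold):
    """labels: True if the post is actually hate speech.
    Returns (precision, recall) for a given removal threshold."""
    flagged = [label for s, label in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                 # hate posts correctly removed
    fp = len(flagged) - tp            # innocent posts wrongly removed
    fn = sum(labels) - tp             # hate posts that slipped through
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical classifier scores and ground-truth labels for 8 posts.
scores = [0.95, 0.85, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [True, True, False, True, False, False, True, False]

# Low threshold: high recall, more false positives.
# High threshold: high precision, more missed hate speech.
for t in (0.15, 0.50, 0.90):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

At threshold 0.90 nothing innocent is removed but most hate speech survives; at 0.15 every hate post is caught but nearly half of removals are false positives. There is no threshold in this toy data that gets both numbers to 1.0, which is the whole problem.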
And without getting all Free Expressioney on here, false positives can actually be very bad. People rightfully hate having their non-hateful content removed as hate speech, and it can result in account suspensions, bans, etc.
On top of all this, most platforms let you appeal decisions you disagree with. Turning up the false-positives knob increases the volume of appeals, creating more busywork for content reviewers who could instead be focusing on removing actual hate speech.
(of course, some fraction of the true positives will also appeal)
anyway it's complicated stuff
BTW I'm not trying to suggest in any way that Facebook or any other company is remotely close to the optimal point of the tradeoff. I just think the tradeoff needs to be acknowledged and understood, particularly by external policymakers and pundits. https://twitter.com/colin_fraser/status/1371270324491227150