THREAD: In the past week, I've received the same general question from four different media outlets, so it's worth amplifying the answer: a platform's decision to moderate (or not moderate) user content does not remove 230 protections.
Whether it *should* is a different question, and would require a significant change to the law (and, of course, that is very much a possibility). But in its current form and under all applicable caselaw, platforms can moderate however they want. That was a goal of 230.
If the moderation itself adds something that is defamatory or otherwise illegal, that could trigger liability. But the defamation would have to be added by the platform. If the platform removed one defamatory comment but left another defamatory comment up, 230 still applies.
230 also protects platforms from liability arising from good-faith moderation of objectionable content. There is a "good faith" caveat to that provision, though courts often do not even need to address it because they rule that 230 applies under the statute's broader provision.
There is not a large body of caselaw exploring what is "good faith." @AnnemarieBridy has very thoughtfully written about the issue here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3154117
Additionally, even if a litigant could convince a court that moderation was not in good faith, there would need to be an underlying cause of action. That is a big hurdle separate from 230.
I reiterate my offer to any reporter: I'm happy to talk with you about 230 at any time. We need accurate coverage of 230. My cell phone and email are at http://jeffkosseff.com. Will talk any time on deadline as long as I am not busy with teaching or parenting responsibilities.