Thank you, members of the Commerce Committee, for the opportunity to speak with the American people about Twitter and §230. My remarks will be brief so we can get to your questions. §230 is the most important law protecting internet speech. Removing §230 will remove speech from the internet.
§230 gave internet services two important tools. The first provides immunity from liability for users’ content. The second provides “Good Samaritan” protections for content moderation and removal, even of constitutionally protected speech, as long as it’s done “in good faith.”
That concept of “good faith” is what’s being challenged by many of you today. Some of you don’t trust that we’re acting in good faith. That’s the problem I want to focus on solving. How do services like Twitter earn your trust? And how do we ensure more choice in the market if we don’t?
There are three solutions we’d like to propose to address the concerns raised, all focused on services that decide to moderate or remove content. They could be expansions to §230, new legislative frameworks, or a commitment to industry-wide self-regulation best practices.
The first is requiring a service’s moderation process to be published. How are cases reported and reviewed? How are decisions made? What tools are used to enforce? Publishing answers to questions like these will make our process more robust and accountable to the people we serve.
The second is requiring a straightforward process to appeal decisions made by humans or algorithms. This ensures people can let us know when we don't get it right, so that we can fix any mistakes and make our processes better in the future.
And finally, much of the content people see today is determined by algorithms, with very little visibility into how those algorithms choose what to show. We took a first step in making this more transparent by building a button to turn off our home timeline algorithms. It’s a good start.
We’re inspired by the market approach suggested by Dr. Stephen Wolfram before this committee in June 2019. Enabling people to choose algorithms created by third parties to rank and filter their content is an incredibly energizing idea that’s in reach. https://writings.stephenwolfram.com/2019/06/testifying-at-the-senate-about-a-i-selected-content-on-the-internet/
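To make that third suggestion concrete, here is a minimal sketch of what algorithmic choice could look like in code. It is an illustration only, not Twitter’s actual implementation or API; every name in it (Post, Ranker, build_timeline, and the example rankers) is hypothetical.

```python
# Hypothetical sketch of "algorithmic choice" (not a real Twitter API):
# the service supplies candidate posts, and the person chooses which
# ranker (possibly written by a third party) orders their timeline.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Post:
    post_id: str
    author: str
    text: str
    created_at: float        # Unix timestamp
    engagement_score: float  # illustrative platform-supplied signal


# A "ranker" is simply a function from candidate posts to an ordered timeline.
Ranker = Callable[[List[Post]], List[Post]]


def reverse_chronological(posts: List[Post]) -> List[Post]:
    """Baseline 'algorithm off' experience: latest posts first."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)


def engagement_first(posts: List[Post]) -> List[Post]:
    """Example third-party ranker: order by an engagement signal."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)


# Registry the person chooses from in their settings; third parties
# could register additional rankers here.
AVAILABLE_RANKERS: Dict[str, Ranker] = {
    "latest": reverse_chronological,
    "most_engaging": engagement_first,
}


def build_timeline(candidates: List[Post], chosen_ranker: str) -> List[Post]:
    """Apply whichever ranker the person selected."""
    return AVAILABLE_RANKERS[chosen_ranker](candidates)
```

A real system would also have to address how candidate posts and signals are shared, how third-party code is sandboxed, and how choices are presented; the point here is only that ranking can be a plug-in the person selects rather than a fixed platform default.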
Requiring 1) moderation processes and practices to be published, 2) a straightforward process to appeal decisions, and 3) best efforts around algorithmic choice are suggestions to address the concerns we all have going forward. And they’re all achievable in short order.
It’s critical that, as we consider these solutions, we optimize for new startups and independent developers. Doing so ensures a level playing field that increases the probability that competing ideas will help solve these problems. We mustn’t entrench the largest companies any further.
Thank you for the time, and I look forward to a productive discussion to dig into these and other ideas.