After spending a few years cutting through Big Tech's B.S. about AI ethics, I've created a glossary to help you decode what all of their favorite terms actually mean.

I had way too much fun working on this! Threading some of my favorites below. https://www.technologyreview.com/2021/04/13/1022568/big-tech-ai-ethics-guide/
AGI (ph) - A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money.
human-centered design (ph) - A process that involves using “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time.
human in the loop (ph) - Any person that is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.
regulation (n) - What you call for to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your growth.
transparency (n) - Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.
wealth redistribution (ph) - A useful idea to dangle around when people scrutinize you for making way too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out yourself. Would require regulation.
h/t to all the people who weighed in with fantastic suggestions last week. https://twitter.com/_KarenHao/status/1379428785657765910