One of my research topics in grad school was fairness, accountability and transparency (FAT) in NLP systems. I've kept up with the literature.

Here's a quick thread of papers I'd recommend reading on the topic if you want to get up to speed.
"The Social Impact of Natural Language Processing" by @dirk_hovy and @ShannonSpruit. One of the earliest landmark papers on the topic, reviews different types of harm (with examples) https://www.aclweb.org/anthology/P16-2096.pdf
"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings" by Bolukbasi et al. http://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-d

This paper set off a large body of research on bias in word embeddings. (Note: I personally don't think this is the most urgent ethical issue.)
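
If you want to poke at this yourself, here's a minimal sketch of the kind of analogy probe the paper's title refers to, using gensim's downloader for the pretrained Google News word2vec vectors. Note the single-pair he-she direction below is a simplification of the paper's PCA-based gender subspace, and the exact neighbors you get may vary:

```python
# Probe analogy-style gender associations in pretrained word embeddings,
# in the spirit of Bolukbasi et al. (a sketch, not their exact method).
import numpy as np
import gensim.downloader as api

# Large download on first run; cached afterwards.
model = api.load("word2vec-google-news-300")

# The title analogy: man : computer_programmer :: woman : ?
print(model.most_similar(positive=["woman", "computer_programmer"],
                         negative=["man"], topn=5))

# A crude "gender direction": the difference of one gendered word pair.
gender_direction = model["he"] - model["she"]
gender_direction /= np.linalg.norm(gender_direction)

def gender_projection(word):
    """Cosine of the word vector with the he-she direction.
    Positive ~ 'he'-associated, negative ~ 'she'-associated."""
    v = model[word] / np.linalg.norm(model[word])
    return float(v @ gender_direction)

for w in ["programmer", "homemaker", "nurse", "engineer"]:
    print(w, round(gender_projection(w), 3))
```

The paper itself goes further: it identifies a gender subspace via PCA over many definitional pairs, then "hard-debiases" by projecting gender-neutral words out of that subspace.
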
"Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints" by @jieyuzhao11 et al https://www.aclweb.org/anthology/D17-1323.pdf

This is an important paper because it shows that ML systems not only learn underlying biases but also *amplify* them.
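
To make "amplify" concrete, here's a toy version of the bias score Zhao et al. compute: the share of one gender among all gendered instances of an activity, compared between the training data and the model's predictions. The counts below are made up, but they echo the paper's cooking example, where roughly 66% of training instances involve women and the model's predictions push that to about 84%:

```python
# Toy illustration of bias amplification as measured by Zhao et al.:
# compare an activity's gender skew in the training data with the skew
# in the model's predictions. All counts below are hypothetical.

def bias_score(counts):
    """Fraction of (activity, woman) pairs among all gendered pairs
    for one activity: counts = (num_woman, num_man)."""
    woman, man = counts
    return woman / (woman + man)

# (num_woman, num_man) per activity: made-up training counts
# vs. made-up counts over a model's predictions.
train = {"cooking": (66, 34), "driving": (30, 70)}
preds = {"cooking": (84, 16), "driving": (22, 78)}

for activity in train:
    b_train = bias_score(train[activity])
    b_pred = bias_score(preds[activity])
    print(f"{activity}: train bias {b_train:.2f} -> "
          f"predicted bias {b_pred:.2f} "
          f"(amplification {b_pred - b_train:+.2f})")
```

The fix they propose adds corpus-level constraints at inference time, so the predicted ratio for each activity can't drift far from the training ratio.
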
"Principled Frameworks for Evaluating Ethics in NLP Systems" by Prabhumoye et al https://www.aclweb.org/anthology/W19-3637/

Another recent paper that I like for its clear overview of relevant frameworks from ethics.
A final note:

- ML systems have the clear potential to harm a lot of people
- It's the responsibility of EVERYONE who works on them, researchers and practitioners alike, to mitigate that harm to the greatest extent possible