While recent conversations surrounding AI bias have been necessary & useful, several notable works by prolific researchers on the topic were never acknowledged or even cited. So, I am starting a thread to discuss important work on fairness, explainability, and ethics (1/N) #aibias
This thread will focus on the work of researchers who never got their due in the discussions that happened in the past week. I will include works by my favorite researchers across all races/genders. Please feel free to comment & add anyone I might have missed. (2/N)
Cynthia Dwork and @mrtz deserve to be at the top of this list. They started working on fairness in 2012, when the world was not yet sure why fair algorithms were even needed. Their paper "Fairness through awareness" is one of my favorite papers. (3/N)
I would encourage anyone new to fairml to check out Moritz's book https://fairmlbook.org  and his tutorial on fairness https://mrtz.org/nips17/#/  Tweets won't do justice to his amazing body of work on fairness. Please check out his webpage: https://mrtz.org/  (4/N)
Cynthia Dwork is also a pioneer in both fairness and differential privacy, and has done fundamental work on both of these topics. (5/N)
Jon Kleinberg is next on this list. Jon and several of his students and postdocs, including @hima_lakkaraju, @manish_raghavan, @maithra_raghu, and @HodaHeidari, have written the bulk of my favorite papers on the topics of fairness, explainability, and AI-assisted decision making (6/N)
Jon Kleinberg (+ @hima_lakkaraju) wrote one of my favorite papers which provides guidelines for thinking about several of the issues that arise when designing/evaluating AI tools for important decisions. If you haven't already seen this, check out: https://www.nber.org/papers/w23180  (7/N)
Jon Kleinberg (+ @manish_raghavan) have written another amazing paper on understanding the trade-offs between different notions of fairness and why they are fundamentally incompatible. https://arxiv.org/abs/1609.05807  (8/N)
Jon Kleinberg (+ @maithra_raghu)'s paper on the algorithmic automation problem also sheds light on various critical aspects of automating decision making https://arxiv.org/abs/1903.12220  (9/N)
It is impossible to do justice to Prof. Kleinberg's work on fairness and related topics via tweets alone. Please check out his webpage for a full list of his papers: http://www.cs.cornell.edu/home/kleinber/  (10/N)
@hima_lakkaraju has done very important work on various aspects of ML-assisted decision making -- explainability, fairness, & detecting biases. Her course on explainability is what got me interested in the FATML space in the first place: https://interpretable-ml-class.github.io/  (11/N)
@hima_lakkaraju's papers on exposing vulnerabilities of explanation methods, and on how they can mislead end users into trusting biased algorithms, are some of the best I have seen on this topic recently. https://arxiv.org/pdf/1911.02508.pdf and https://arxiv.org/pdf/1911.06473.pdf (12/N)
@HodaHeidari has also been doing some amazing work on the topic of algorithmic fairness. Her papers https://arxiv.org/pdf/1902.04783.pdf and https://www.cs.cornell.edu/~hh732/heidari2018fairness.pdf are must-reads. (13/N)
@kamalikac is one of my favorite researchers on trustworthy ML. I learned the basics of differential privacy and trustworthy ML from her tutorials and courses: https://vimeo.com/248492174  and http://cseweb.ucsd.edu/classes/sp20/cse291-b/ (14/N)
@FinaleDoshi is an amazing researcher who works on interpretability, RL & healthcare. Her position paper on interpretability (+ @_beenkim) is a must read https://arxiv.org/abs/1702.08608 . Her paper on the accountability of AI is also a revelation https://arxiv.org/abs/1711.01134  (16/N)
@ecekamar's work on complementary human/machine decision making is a must read in FATML. My favorite work of hers includes detecting and fixing blind spots of ML models that arise due to dataset biases. See https://arxiv.org/abs/1610.09064  (+ @hima_lakkaraju) & https://arxiv.org/abs/1805.08966  (18/N)
@2plus2make5 has also done incredible work on detecting discrimination in algorithmic decision making. If you haven't already, please check out: https://arxiv.org/abs/1701.08230  and https://arxiv.org/abs/1702.08536  (19/N)
@hannawallach and @jennwvaughan have also been doing amazing work at the intersection of fairness, interpretability, and HCI. Check out some of their amazing work at http://www.jennwv.com/papers/interp-ds.pdf & http://www.jennwv.com/papers/accuracy-trust.pdf (20/N)
@kgummadi also has an amazing body of work on fairness and algorithmic decision making. He is one of the most underrated researchers in this area. Among his many papers, see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3465622 and https://arxiv.org/abs/1507.05259  (21/N)
@sameer_ is another important name in the interpretability literature. His paper on LIME https://arxiv.org/abs/1602.04938  is extremely well known. He also has a lot of interesting work on biases and interpretability in NLP ( https://arxiv.org/pdf/2005.00724.pdf). (22/N)
@Aaroth and @mkearnsupenn are two more researchers who have done very important and foundational work on fairness and discrimination. I recently started reading their book on ethical machine learning, and it has been a revelation. https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205 (23/N)
I am sure I am missing a bunch of other amazing folks working in FATML. I also want to re-emphasize that @le_roux_nicolas's list ( https://twitter.com/le_roux_nicolas/status/1277227931031547904) covers a lot of my favorite researchers on the topic. Please feel free to comment below with your favorite researchers. (N/N)
You can follow @suguna_misra.