Some folks are uncritically arguing that only *architectures* matter in deep learning, not datasets or anything else (so Fei-Fei Li creating ImageNet didn't contribute much 🙄)

This kind of tunnel vision is so harmful to the field.
Other things that matter in deep learning (and that we should start valuing more): framing problems, collecting data, interpreting results, communicating results, encouraging adoption, ETHICS of what we're doing https://twitter.com/math_rachel/status/1135709270928961536?s=20
This is a great paper by @timnitGebru @unsojo on why we should be giving a lot more thought to how we curate our datasets in ML: https://twitter.com/math_rachel/status/1223799130180349953?s=20
GenderShades is great research for many reasons, but I love that it was specifically designed to have a real-world impact. This sort of thoughtful design & effective communication are too rare in machine learning. @jovialjoy

(see more in @rajiinio's talk) https://twitter.com/math_rachel/status/1163637485533929473?s=20
Another area that is often neglected in deep learning is research on how to do things in a cheaper, simpler way, using fewer resources: https://twitter.com/math_rachel/status/1141826118128828416?s=20
Related: hero-worship & under-valuing communities are harmful https://twitter.com/math_rachel/status/1253033620857651201?s=20