What's the "tradeoff" between interpretability & accuracy? Unfortunately, no one agrees on what's "interpretable". To move the needle an inch, @KDziugaite, Shai Ben-David, and I propose to model the *act* of enforcing interpretability as constrained ERM. https://arxiv.org/abs/2010.13764 
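(Not the paper's formalism, just a toy illustration of the idea.) "Constrained ERM" here means running empirical risk minimization over a restricted subclass H_c of the full hypothesis class H; any gap between the two minimized risks is the "tradeoff". A minimal sketch with 1D threshold classifiers, where the constrained subclass is a hypothetical stand-in for an interpretability constraint:

```python
# Toy sketch: constrained ERM = ERM restricted to a subclass H_c of H.
# The "interpretability" constraint here (round thresholds only) is an
# illustrative stand-in, not anything from the paper.

def empirical_risk(h, data):
    """Fraction of examples the threshold classifier x >= h misclassifies."""
    return sum(1 for x, y in data if (x >= h) != y) / len(data)

def erm(hypotheses, data):
    """Return the hypothesis in `hypotheses` with minimal empirical risk."""
    return min(hypotheses, key=lambda h: empirical_risk(h, data))

# Toy 1D data: true label is 1 exactly when x >= 0.37.
data = [(x / 100, x / 100 >= 0.37) for x in range(100)]

H = [t / 100 for t in range(100)]   # unconstrained class: fine-grained thresholds
H_c = [0.0, 0.25, 0.5, 0.75]        # constrained subclass: "simple" thresholds only

h_star = erm(H, data)    # unconstrained ERM achieves zero empirical risk here
h_c = erm(H_c, data)     # constrained ERM pays a price for the restriction
print(empirical_risk(h_star, data), empirical_risk(h_c, data))
```

Whether you see a gap depends entirely on whether H_c happens to contain a good hypothesis for the data, which is one way to frame why the interpretability/accuracy tradeoff is not automatic.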
This paper seems to really irk and confuse reviewers. Many want us to define what interpretable means. Others don't understand how studying the effect of an abstract constraint could bear on interpretability because... interpretability is special?
From our perspective, there's something to be gained by reasoning about why one may or may not see a tradeoff when imposing interpretability. I think there's also a more Bayesian story to tell here about the application of black-box machine learning tools and epistemic uncertainty.
Anyway, we are all interested to hear feedback and to see people help us build up some foundations.
You can follow @roydanroy.