Challenging common interpretability assumptions in feature attribution explanations
Unpublished · Jonathan Dinu, Jeffrey Bigham, Zico Kolter

With algorithmic and autonomous systems becoming more ubiquitous in everyday life, there has been renewed interest in understanding how users perceive these systems and how the systems behave under various conditions and inputs. Researchers have responded to this need with explainable AI (XAI), but interpretability is often claimed axiomatically, without evaluation. When these methods are evaluated, it is often through offline simulations using proxy metrics of interpretability (such as model complexity).

We empirically evaluate the veracity of three common interpretability assumptions through a large-scale human-subjects experiment with a simple “placebo explanation” control. We find that feature attribution explanations provide only marginal utility for a human decision maker on our task, and in certain cases result in worse decisions due to cognitive and contextual confounders. This result challenges the assumed universal benefit of applying these methods, and we hope this work underscores the importance of human evaluation in XAI research.
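For readers unfamiliar with the format, the sketch below illustrates the general shape of a feature attribution explanation: a per-feature score indicating how strongly each input feature pushed a particular prediction. The linear model, dataset, and coefficient-times-value scoring here are illustrative assumptions only, not the models or explanation methods used in the study.

```python
# Minimal sketch of a per-instance feature attribution explanation,
# assuming a linear model where (standardized feature value x coefficient)
# serves as the attribution score. Illustration only; not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
model.fit(X, y)

# Attribution for a single instance: each feature's contribution to the
# log-odds of the positive class under the fitted linear model.
instance = model.named_steps["standardscaler"].transform(X.iloc[[0]])[0]
weights = model.named_steps["logisticregression"].coef_[0]
attributions = sorted(
    zip(X.columns, instance * weights), key=lambda kv: abs(kv[1]), reverse=True
)

# Show the five most influential features for this prediction.
for name, score in attributions[:5]:
    print(f"{name:30s} {score:+.3f}")
```

An explanation like this is what participants in such studies typically see (often rendered as a bar chart); the paper's question is whether seeing it actually improves human decisions.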
