Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML

Zixi Chen · Varshini Subhash · Marton Havasi · Weiwei Pan · Finale Doshi-Velez


Abstract:

Interpretability provides a means for humans to verify aspects of machine learning (ML) models. Different tasks require explanations with different properties. However, there is currently no standardized way to assess the properties of explanations: different papers use the same term for different quantities, and different terms for the same quantity. This lack of standardization prevents us from rigorously comparing explanation systems. In this work, we survey the properties of explanations defined in the current interpretable ML literature, synthesize them based on what they measure, and describe the trade-offs between different formulations of these properties. The result is a unifying framework for comparing properties of interpretable ML.
