As black box explanations are increasingly being employed to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. In addition, these methods are also computationally inefficient and require significant hyperparameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages this uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
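The core idea can be pictured as fitting a weighted Bayesian linear regression to black-box predictions on local perturbations, so that the posterior over the surrogate's coefficients directly yields credible intervals for the feature importances. The sketch below illustrates this in the spirit of a Bayesian LIME; the function name, the binary perturbation scheme, the kernel width, and the near-flat prior are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def bayeslime_sketch(model_predict, x, n_samples=200, kernel_width=0.75, seed=0):
    """Illustrative Bayesian-LIME-style explanation (a sketch, not the paper's code).

    Fits a weighted Bayesian linear regression with a near-flat conjugate prior
    to black-box predictions on binary perturbations of x, returning posterior
    mean importances and 95% credible intervals per feature.
    Assumes n_samples > len(x) so the posterior degrees of freedom are positive.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Binary masks: 1 keeps a feature, 0 zeroes it out (baseline assumed to be 0).
    Z = rng.integers(0, 2, size=(n_samples, d))
    y = model_predict(Z * x)                        # black-box predictions
    # LIME-style exponential kernel: weight perturbations by closeness to x.
    dist = 1.0 - Z.mean(axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares == posterior mean under a (near-)flat prior.
    Zw = Z * w[:, None]
    V = np.linalg.inv(Z.T @ Zw + 1e-6 * np.eye(d))  # posterior covariance scale
    beta = V @ (Zw.T @ y)                           # posterior mean importances
    # Each coefficient's marginal posterior is Student-t; build 95% intervals.
    resid = y - Z @ beta
    dof = n_samples - d
    s2 = (w * resid) @ resid / dof                  # weighted residual variance
    half = stats.t.ppf(0.975, dof) * np.sqrt(s2 * np.diag(V))
    return beta, beta - half, beta + half
```

Calling this with a wrapper around a tabular model's probability for the class of interest would return importances whose credible intervals shrink as `n_samples` grows; that shrinkage is the kind of handle the paper's analysis uses to decide how many perturbations suffice.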
Author Information
Dylan Slack (UC Irvine)
Anna Hilgard (Harvard University)
Sameer Singh (University of California, Irvine)
Himabindu Lakkaraju (Stanford University)
More from the Same Authors
- 2021: Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors
  Dylan Slack · Krishnaram Kenthapadi
- 2021 Poster: Towards Robust and Reliable Algorithmic Recourse
  Sohini Upadhyay · Shalmali Joshi · Himabindu Lakkaraju
- 2021 Poster: Learning Models for Actionable Recourse
  Alexis Ross · Himabindu Lakkaraju · Osbert Bastani
- 2021 Poster: Counterfactual Explanations Can Be Manipulated
  Dylan Slack · Anna Hilgard · Himabindu Lakkaraju · Sameer Singh
- 2020 Poster: From Predictions to Decisions: Using Lookahead Regularization
  Nir Rosenfeld · Anna Hilgard · Sai Srivatsa Ravindranath · David Parkes
- 2019: Poster session
  Jindong Gu · Alice Xiang · Atoosa Kasirzadeh · Zhiwei Han · Omar U. Florez · Frederik Harder · An-phi Nguyen · Amir Hossein Akhavan Rahnama · Michele Donini · Dylan Slack · Junaid Ali · Paramita Koley · Michiel Bakker · Anna Hilgard · Hailey James-Sorenson · Gonzalo Ramos · Jialin Lu · Jingying Yang · Margarita Boyarskaya · Martin Pawelczyk · Kacper Sokol · Mimansa Jaiswal · Umang Bhatt · David Alvarez-Melis · Aditya Grover · Charles Marx · Mengjiao Yang · Jingyan Wang · Gökhan Çapan · Hanchen Wang · Steffen Grünewälder · Moein Khajehnejad · Gourab Patro · Russell Kunes · Samuel Deng · Yuanting Liu · Luca Oneto · Mengze Li · Thomas Weber · Stefan Matthes · Duy Patrick Tu
- 2016 Poster: Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making
  Himabindu Lakkaraju · Jure Leskovec