Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method). MAPLE has two fundamental advantages over existing interpretability systems. First, while it is effective as a black-box explanation system, MAPLE itself is a highly accurate predictive model that provides faithful self-explanations, and thus sidesteps the typical accuracy-interpretability trade-off. Specifically, we demonstrate, on several UCI datasets, that MAPLE is at least as accurate as random forests and that it produces more faithful local explanations than LIME, a popular interpretability system. Second, MAPLE provides both example-based and local explanations and can detect global patterns, which allows it to diagnose limitations in its local explanations.
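The abstract's core idea — use tree ensembles to define a supervised neighborhood around a query point, then fit a weighted linear model whose coefficients serve as the local explanation — can be sketched in a few lines. The following is a hypothetical, stdlib-only toy, not the authors' implementation: the single-split stand-in "trees", the synthetic data, and all function names are illustrative substitutes for MAPLE's actual random-forest machinery and feature selection.

```python
import random

random.seed(0)

# Synthetic data: y = 2*x0 + x1 + noise, over three features.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(400)]
y = [2 * x[0] + x[1] + random.gauss(0, 0.1) for x in X]

# Stand-in "forest": each tree is a single random axis-aligned split,
# so a training point is a neighbor of the query in a tree when both
# land on the same side of the split.  (MAPLE uses real forest leaves.)
def make_tree():
    feat, thresh = random.randrange(3), random.gauss(0, 1)
    return lambda p: p[feat] > thresh

trees = [make_tree() for _ in range(25)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def local_explanation(x_query):
    """Weight each training point by the fraction of trees in which it
    shares the query's leaf, then fit a weighted linear model; the
    fitted coefficients are the local explanation."""
    q = [t(x_query) for t in trees]
    w = [sum(t(x) == s for t, s in zip(trees, q)) / len(trees) for x in X]
    # Weighted least squares on [1, x0, x1, x2] via the normal equations.
    Z = [[1.0] + x for x in X]
    d = 4
    A = [[sum(wi * zi[r] * zi[c] for wi, zi in zip(w, Z)) for c in range(d)]
         for r in range(d)]
    b = [sum(wi * zi[r] * yi for wi, zi, yi in zip(w, Z, y)) for r in range(d)]
    return solve(A, b)  # [intercept, coef_x0, coef_x1, coef_x2]

coefs = local_explanation([0.0, 0.0, 0.0])
```

Because the toy data are globally linear, the recovered coefficients approximate the true weights everywhere; the interesting case in the paper is nonlinear data, where the forest-induced weighting localizes the fit and the coefficients become a genuinely local explanation.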
Author Information
Gregory Plumb (CMU)
Denali Molitor (University of California, Los Angeles)
Ameet Talwalkar (CMU)
More from the Same Authors
- 2021 : Simulated User Studies for Explanation Evaluation
  Valerie Chen · Gregory Plumb · Nicholay Topin · Ameet S Talwalkar
- 2021 : Bayesian Persuasion for Algorithmic Recourse
  Keegan Harris · Valerie Chen · Joon Sik Kim · Ameet Talwalkar · Hoda Heidari · Steven Wu
- 2021 : Bayesian Persuasion for Algorithmic Recourse
  Keegan Harris · Valerie Chen · Joon Kim · Ameet S Talwalkar · Hoda Heidari · Steven Wu
- 2022 : AutoML for Climate Change: A Call to Action
  Renbo Tu · Nicholas Roberts · Vishak Prasad C · Sibasis Nayak · Paarth Jain · Frederic Sala · Ganesh Ramakrishnan · Ameet Talwalkar · Willie Neiswanger · Colin White
- 2022 Competition: AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale
  Samuel Guo · Cong Xu · Nicholas Roberts · Misha Khodak · Junhong Shen · Evan Sparks · Ameet Talwalkar · Yuriy Nevmyvaka · Frederic Sala · Anderson Schneider
- 2022 Poster: Use-Case-Grounded Simulations for Explanation Evaluation
  Valerie Chen · Nari Johnson · Nicholay Topin · Gregory Plumb · Ameet Talwalkar
- 2022 Poster: Provably tuning the ElasticNet across instances
  Maria-Florina Balcan · Misha Khodak · Dravyansh Sharma · Ameet Talwalkar
- 2022 Poster: Learning Predictions for Algorithms with Predictions
  Misha Khodak · Maria-Florina Balcan · Ameet Talwalkar · Sergei Vassilvitskii
- 2022 Poster: Efficient Architecture Search for Diverse Tasks
  Junhong Shen · Misha Khodak · Ameet Talwalkar
- 2022 Poster: Bayesian Persuasion for Algorithmic Recourse
  Keegan Harris · Valerie Chen · Joon Kim · Ameet Talwalkar · Hoda Heidari · Steven Wu
- 2022 Poster: NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks
  Renbo Tu · Nicholas Roberts · Misha Khodak · Junhong Shen · Frederic Sala · Ameet Talwalkar
- 2021 : [S9] Simulated User Studies for Explanation Evaluation
  Valerie Chen · Gregory Plumb · Nicholay Topin · Ameet S Talwalkar
- 2021 Poster: Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
  Mikhail Khodak · Renbo Tu · Tian Li · Liam Li · Maria-Florina Balcan · Virginia Smith · Ameet Talwalkar
- 2021 Poster: Rethinking Neural Operations for Diverse Tasks
  Nicholas Roberts · Mikhail Khodak · Tri Dao · Liam Li · Christopher Ré · Ameet Talwalkar
- 2021 Poster: Learning-to-learn non-convex piecewise-Lipschitz functions
  Maria-Florina Balcan · Mikhail Khodak · Dravyansh Sharma · Ameet Talwalkar
- 2020 Workshop: International Workshop on Scalability, Privacy, and Security in Federated Learning (SpicyFL 2020)
  Xiaolin Andy Li · Dejing Dou · Ameet Talwalkar · Hongyu Li · Jianzong Wang · Yanzhi Wang
- 2020 Poster: Regularizing Black-box Models for Improved Interpretability
  Gregory Plumb · Maruan Al-Shedivat · Ángel Alexander Cabrera · Adam Perer · Eric Xing · Ameet Talwalkar
- 2019 Poster: Adaptive Gradient-Based Meta-Learning Methods
  Misha Khodak · Maria-Florina Balcan · Ameet Talwalkar
- 2017 Poster: Variable Importance Using Decision Trees
  Jalil Kazemitabar · Arash Amini · Adam Bloniarz · Ameet S Talwalkar
- 2017 Poster: Federated Multi-Task Learning
  Virginia Smith · Chao-Kai Chiang · Maziar Sanjabi · Ameet S Talwalkar
- 2016 : Invited Talk: Paleo: A Performance Model for Deep Neural Networks (Ameet Talwalkar, UCLA)
  Ameet S Talwalkar
- 2016 Poster: Yggdrasil: An Optimized System for Training Deep Decision Trees at Scale
  Firas Abuzaid · Joseph K Bradley · Feynman Liang · Andrew Feng · Lee Yang · Matei Zaharia · Ameet S Talwalkar
- 2014 Workshop: Distributed Machine Learning and Matrix Computations
  Reza Zadeh · Ion Stoica · Ameet S Talwalkar
- 2011 Workshop: Sparse Representation and Low-rank Approximation
  Ameet S Talwalkar · Lester W Mackey · Mehryar Mohri · Michael W Mahoney · Francis Bach · Mike Davies · Remi Gribonval · Guillaume R Obozinski
- 2011 Poster: Divide-and-Conquer Matrix Factorization
  Lester W Mackey · Ameet S Talwalkar · Michael Jordan
- 2010 Workshop: Low-rank Methods for Large-scale Machine Learning
  Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar
- 2009 Poster: Ensemble Nystrom Method
  Sanjiv Kumar · Mehryar Mohri · Ameet S Talwalkar