Feature attribution for kernel methods is often heuristic and not individualised for each prediction. To address this, we turn to the concept of Shapley values (SVs), a coalitional game theory framework that has previously been applied to the interpretation of various machine learning models, including linear models, tree ensembles, and deep networks. By analysing SVs from a functional perspective, we propose RKHS-SHAP, an attribution method for kernel machines that can efficiently compute both interventional and observational Shapley values using kernel mean embeddings of distributions. We show theoretically that our method is robust to local perturbations, a key yet often overlooked desideratum for consistent model interpretation. Further, we propose the Shapley regulariser, applicable to a general empirical risk minimisation framework, which allows learning while controlling the level of a specific feature's contribution to the model. We demonstrate that the Shapley regulariser enables learning that is robust to covariate shift of a given feature, as well as fair learning that controls the SVs of sensitive features.
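To make the central quantities concrete, here is a minimal sketch, not the authors' implementation, of interventional Shapley values for a kernel ridge regression model with a Gaussian kernel. Because such a kernel factorises across coalition and off-coalition features, the interventional value function v(S) = E[f(x_S, X'_Sbar)] can be evaluated through the empirical kernel mean embedding of the off-coalition features. The helper names (fit_krr, value_fn, shapley), the bandwidth, and the brute-force coalition enumeration are all assumptions of this sketch, not details taken from the paper.

```python
import itertools
import math

import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian kernel between rows of A and B; with a shared bandwidth it
    # factorises over feature subsets, which value_fn relies on below.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_krr(X, y, lam=1e-2, gamma=1.0):
    # Kernel ridge regression: f(x) = sum_i alpha_i k(x_i, x).
    n = len(X)
    return np.linalg.solve(rbf(X, X, gamma) + lam * n * np.eye(n), y)

def value_fn(X, alpha, x, S, gamma=1.0):
    # Interventional value v(S) = E_{X'}[f(x_S, X'_Sbar)]: the expectation over
    # the off-coalition features is replaced by their empirical kernel mean
    # embedding, evaluated at each training point x_i.
    d = X.shape[1]
    Sbar = [j for j in range(d) if j not in S]
    kS = rbf(X[:, S], x[None, S], gamma)[:, 0] if S else np.ones(len(X))
    kSbar = rbf(X[:, Sbar], X[:, Sbar], gamma).mean(axis=1) if Sbar else np.ones(len(X))
    return float(alpha @ (kS * kSbar))

def shapley(X, alpha, x, gamma=1.0):
    # Exact SVs by enumerating all coalitions; only viable for small d.
    d = X.shape[1]
    phi = np.zeros(d)
    for j in range(d):
        rest = [i for i in range(d) if i != j]
        for size in range(d):
            for S in itertools.combinations(rest, size):
                w = math.factorial(size) * math.factorial(d - size - 1) / math.factorial(d)
                phi[j] += w * (value_fn(X, alpha, x, list(S) + [j], gamma)
                               - value_fn(X, alpha, x, list(S), gamma))
    return phi

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2   # feature 2 never enters the target
alpha = fit_krr(X, y)
print(shapley(X, alpha, X[0]))     # feature 2's attribution should be near zero
```

The mean-embedding view is what makes each value function cheap to evaluate; the exponential coalition loop above is purely illustrative, whereas the paper targets efficient computation. Observational SVs would instead require conditioning on the coalition features, via conditional mean embeddings, rather than marginalising them out as done here.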
Author Information
Siu Lun Chau (University of Oxford)
Robert Hu (Amazon)
Javier González (Microsoft, Health Futures)
Dino Sejdinovic (University of Adelaide)
More from the Same Authors
- 2021: Invariant Priors for Bayesian Quadrature (Masha Naslidnyk · Javier González · Maren Mahsereci)
- 2022: Bayesian inference for aerosol vertical profiles (Shahine Bouabid · Duncan Watson-Parris · Dino Sejdinovic)
- 2023 Poster: A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods (Veit David Wild · Sahra Ghalebikesabi · Dino Sejdinovic · Jeremias Knoblauch)
- 2023 Poster: Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models (Siu Lun Chau · Krikamol Muandet · Dino Sejdinovic)
- 2023 Poster: Squared Neural Families: A New Class of Tractable Density Models (Russell Tsuchida · Cheng Soon Ong · Dino Sejdinovic)
- 2023 Oral: A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods (Veit David Wild · Sahra Ghalebikesabi · Dino Sejdinovic · Jeremias Knoblauch)
- 2022 Poster: Giga-scale Kernel Matrix-Vector Multiplication on GPU (Robert Hu · Siu Lun Chau · Dino Sejdinovic · Joan Glaunès)
- 2022 Poster: Explaining Preferences with Shapley Values (Robert Hu · Siu Lun Chau · Jaime Ferrando Huertas · Dino Sejdinovic)
- 2022 Poster: Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning (Veit David Wild · Robert Hu · Dino Sejdinovic)
- 2021: Panel (Mohammad Emtiyaz Khan · Atoosa Kasirzadeh · Anna Rogers · Javier González · Suresh Venkatasubramanian · Robert Williamson)
- 2021 Poster: Dynamic Causal Bayesian Optimization (Virginia Aglietti · Neil Dhir · Javier González · Theodoros Damoulas)
- 2021 Poster: BayesIMP: Uncertainty Quantification for Causal Data Fusion (Siu Lun Chau · Jean-Francois Ton · Javier González · Yee Teh · Dino Sejdinovic)
- 2021 Poster: Deconditional Downscaling with Gaussian Processes (Siu Lun Chau · Shahine Bouabid · Dino Sejdinovic)
- 2020 Poster: BOSS: Bayesian Optimization over String Spaces (Henry Moss · David Leslie · Daniel Beck · Javier González · Paul Rayson)
- 2020 Poster: Multi-task Causal Learning with Gaussian Processes (Virginia Aglietti · Theodoros Damoulas · Mauricio Álvarez · Javier González)
- 2020 Spotlight: BOSS: Bayesian Optimization over String Spaces (Henry Moss · David Leslie · Daniel Beck · Javier González · Paul Rayson)
- 2019 Poster: Meta-Surrogate Benchmarking for Hyperparameter Optimization (Aaron Klein · Zhenwen Dai · Frank Hutter · Neil Lawrence · Javier González)