Local approximation reduces the cost of the standard Gaussian process by distributing the training process across partitions of the data. An ensemble technique then combines the local predictions of the Gaussian experts trained on these partitions, typically under the assumption of perfect diversity (i.e., independence) of the local predictors. Although this assumption keeps the aggregation tractable, it is often violated in practice. Taking the dependencies between experts into account enables ensemble methods to provide consistent results, but at a high computational cost that is cubic in the number of experts involved. An expert selection strategy makes the final aggregation step more efficient by using fewer experts. However, a static selection approach that assigns a fixed set of experts to every new data point cannot encode the specific properties of each individual point. This paper proposes a flexible expert selection approach based on the characteristics of the entry data points. To this end, we cast the selection task as a multi-label classification problem in which the experts define the labels and each entry point is assigned to a subset of the experts. We discuss the prediction quality, efficiency, and asymptotic properties of the proposed solution in detail, and we demonstrate the efficacy of our method through extensive numerical experiments on synthetic and real-world data sets.
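To make the pipeline concrete, the following is a minimal Python sketch of the general scheme described above, not the authors' implementation: the error-threshold relevance criterion, the decision-tree selector, and the precision-weighted product-of-experts aggregation are all illustrative assumptions standing in for the paper's actual choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(600, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(600)

# 1) Partition the data into disjoint regions and train one local GP expert per region.
n_experts = 6
order = np.argsort(X.ravel())
parts = np.array_split(order, n_experts)
experts = [GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X[p], y[p])
           for p in parts]

# 2) Build multi-label targets: expert j is a relevant label for point i if its
#    local prediction error at i is small (an illustrative relevance criterion).
errors = np.column_stack([np.abs(e.predict(X) - y) for e in experts])
labels = (errors < 0.15).astype(int)

# 3) Train a multi-label classifier that maps an entry point to a set of experts
#    (decision trees support multi-label targets natively).
selector = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, labels)

# 4) At test time, aggregate only the selected experts; here with a simple
#    precision-weighted product-of-experts rule.
def predict(x_star):
    x_star = np.atleast_2d(x_star)
    mask = selector.predict(x_star)[0].astype(bool)
    idx = np.flatnonzero(mask) if mask.any() else np.arange(n_experts)
    mus, stds = zip(*(experts[j].predict(x_star, return_std=True) for j in idx))
    prec = 1.0 / np.concatenate(stds) ** 2  # per-expert precisions
    mean = (prec * np.concatenate(mus)).sum() / prec.sum()
    return mean, 1.0 / prec.sum()

mean, var = predict([[0.5]])
print(f"aggregated mean = {mean:.3f}, variance = {var:.4f}")
```

Because the selector returns a point-dependent subset of experts, the aggregation cost scales with the number of selected experts rather than with the full ensemble, which is the efficiency gain the abstract refers to.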
Author Information
Hamed Jalali (University of Tuebingen)
Gjergji Kasneci (University of Tuebingen)
More from the Same Authors
- 2021: CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
  Martin Pawelczyk · Sascha Bielawski · Johan Van den Heuvel · Tobias Richter · Gjergji Kasneci
- 2021: A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
  Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci
- 2021: Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes
  Hamed Jalali · Gjergji Kasneci
- 2022: I Prefer not to Say – Operationalizing Fair and User-guided Data Minimization
  Tobias Leemann · Martin Pawelczyk · Christian Eberle · Gjergji Kasneci
- 2022: Explanation Shift: Detecting distribution shifts on tabular data via the explanation space
  Carlos Mougan · Klaus Broelemann · Gjergji Kasneci · Thanassis Tiropanis · Steffen Staab
- 2022: On the Trade-Off between Actionable Explanations and the Right to be Forgotten
  Martin Pawelczyk · Tobias Leemann · Asia Biega · Gjergji Kasneci
- 2021: Poster Session 1 (gather.town)
  Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani