Distributed Gaussian processes (DGPs) are a popular approach for scaling GPs to big data: the training data is divided into subsets, local inference is performed on each partition, and the local results are aggregated into a global prediction. To combine the local predictions, a conditional independence assumption is typically imposed, which effectively assumes perfect diversity between the subsets. Although this assumption keeps the aggregation tractable, it is often violated in practice and generally yields poor results. In this paper, we propose a novel approach for aggregating the Gaussian experts' predictions with a Gaussian graphical model (GGM), in which the target aggregation is defined as an unobserved latent variable and the local predictions are the observed variables. We first estimate the joint distribution of the latent and observed variables using the Expectation-Maximization (EM) algorithm. The interactions between experts are encoded by the precision matrix of this joint distribution, and the aggregated prediction is obtained from the properties of the conditional Gaussian distribution. Experiments on both synthetic and real datasets show that the new method outperforms other state-of-the-art DGP approaches.
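The aggregation step described above follows the standard conditional-Gaussian identity. Below is a minimal sketch of that step only, assuming the joint mean and precision matrix over the latent target and the expert predictions have already been estimated (in the paper this estimation is done with EM); the function name, variable names, and toy numbers are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): aggregating M Gaussian experts'
# predictions via the conditional of a joint Gaussian. We assume the joint
# mean `mu` and precision matrix `Lambda` over [y, m_1, ..., m_M] are given;
# index 0 is the latent aggregate y, indices 1..M are the observed experts.
import numpy as np

def aggregate_experts(mu, Lambda, expert_preds):
    """Conditional mean/variance of the latent target y given expert predictions.

    mu           : (M+1,) joint mean vector, entry 0 is the latent target
    Lambda       : (M+1, M+1) joint precision matrix
    expert_preds : (M,) observed local predictions at a test point
    """
    lam_yy = Lambda[0, 0]        # scalar precision of the latent target y
    lam_ym = Lambda[0, 1:]       # coupling between y and the experts
    # Conditional Gaussian: y | m ~ N(mu_y - lam_yy^{-1} lam_ym (m - mu_m), lam_yy^{-1})
    cond_mean = mu[0] - (lam_ym @ (expert_preds - mu[1:])) / lam_yy
    cond_var = 1.0 / lam_yy
    return cond_mean, cond_var

# Toy usage with three weakly correlated experts (values are illustrative only).
mu = np.zeros(4)
Lambda = np.array([[ 2.0, -0.5, -0.4, -0.3],
                   [-0.5,  1.5,  0.2,  0.1],
                   [-0.4,  0.2,  1.5,  0.1],
                   [-0.3,  0.1,  0.1,  1.5]])
print(aggregate_experts(mu, Lambda, np.array([0.9, 1.1, 1.0])))
```

Because the precision matrix couples the experts directly, the conditional mean weights each local prediction by its estimated interaction with the target rather than assuming the experts are independent.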
Author Information
Hamed Jalali (University of Tuebingen)
Gjergji Kasneci (University of Tuebingen)
More from the Same Authors
- 2021 : CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
  Martin Pawelczyk · Sascha Bielawski · Johan Van den Heuvel · Tobias Richter · Gjergji Kasneci
- 2021 : A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
  Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci
- 2022 : Expert Selection in Distributed Gaussian Processes: A Multi-label Classification Approach
  Hamed Jalali · Gjergji Kasneci
- 2022 : I Prefer not to Say – Operationalizing Fair and User-guided Data Minimization
  Tobias Leemann · Martin Pawelczyk · Christian Eberle · Gjergji Kasneci
- 2022 : Explanation Shift: Detecting distribution shifts on tabular data via the explanation space
  Carlos Mougan · Klaus Broelemann · Gjergji Kasneci · Thanassis Tiropanis · Steffen Staab
- 2022 : On the Trade-Off between Actionable Explanations and the Right to be Forgotten
  Martin Pawelczyk · Tobias Leemann · Asia Biega · Gjergji Kasneci
- 2021 : [S4] A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
  Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci
- 2021 : Poster Session 1 (gather.town)
  Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani