Gaussian processes (GPs) are non-parametric models built on probabilistic inference and are widely used, e.g., in Bayesian optimization. They are powerful function approximators: in contrast to neural networks, which act as 'black box' approximators, GPs support principled probabilistic inference and quantify the uncertainty of their predictions, making them popular for classification and regression tasks in machine learning. However, exact GP inference scales cubically in the number of observations, which limits its application to large-scale datasets. Moreover, most existing GP-based approaches focus on single-output regression, i.e., a univariate dependent variable, and do not extend easily to multi-output regression tasks. To tackle these issues, we propose an expert-based approximation that yields a learnable model applicable to large-scale datasets and multi-output regression: the multi-output mixture of Gaussian processes (MOMoGP), which employs a deeply structured mixture of single-output GPs encoded via a Probabilistic Circuit. This allows one to accurately capture correlations between multiple outputs without introducing a cubic cost in the number of output dimensions. By recursively partitioning the covariate space and the output space, posterior inference in our model reduces to inference on single-output GP experts, each of which needs to be conditioned only on a small subset of the observations.
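The divide-and-conquer idea behind the expert construction can be illustrated with a short sketch: partition the covariate space, fit an independent single-output GP expert per partition and per output dimension, and route test points to their local experts. This is a minimal illustration under stated assumptions, not the authors' MOMoGP, which learns the recursive partitioning and combines experts as a probabilistic-circuit mixture rather than by hard routing; the quantile-based split, the helper names `fit_partitioned_experts` and `predict_routed`, and the use of scikit-learn's `GaussianProcessRegressor` are all assumptions made for the sketch.

```python
# Minimal sketch (assumed helpers, NOT the authors' MOMoGP implementation):
# split the covariate space into parts, fit one single-output GP per
# (part, output dimension) pair, and route each query to its local experts.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_partitioned_experts(X, Y, n_parts=4):
    """Quantile-split the first covariate (a stand-in for the learned
    recursive partitioning) and fit per-part, per-output GP experts."""
    edges = np.quantile(X[:, 0], np.linspace(0.0, 1.0, n_parts + 1))
    experts = []
    for p in range(n_parts):
        lo, hi = edges[p], edges[p + 1]
        mask = (X[:, 0] >= lo) & (X[:, 0] <= hi)
        # Each expert is conditioned only on its small data subset,
        # avoiding the cubic cost of one global GP on all observations.
        gps = [GaussianProcessRegressor(kernel=RBF()).fit(X[mask], Y[mask, d])
               for d in range(Y.shape[1])]
        experts.append(((lo, hi), gps))
    return experts

def predict_routed(experts, Xq):
    """Hard-route each query to the expert whose region contains it
    (MOMoGP instead mixes experts via a probabilistic circuit)."""
    out = np.zeros((Xq.shape[0], len(experts[0][1])))
    for i, x in enumerate(Xq):
        for (lo, hi), gps in experts:
            if lo <= x[0] <= hi:
                break  # falls back to the last expert if x is out of range
        out[i] = [gp.predict(x[None, :])[0] for gp in gps]
    return out

# Example: 2-D inputs, 3 outputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]], axis=1)
experts = fit_partitioned_experts(X, Y)
print(predict_routed(experts, X[:5]))  # per-output posterior means
```

Hard routing is a deliberate simplification here; the probabilistic-circuit structure in the paper instead yields a proper mixture over experts, so posterior inference stays tractable while output correlations are captured.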
Author Information
Mingye Zhu (University of Science and Technology of China)
Zhongjie Yu (TU Darmstadt)
Martin Trapp (Aalto University)
Arseny Skryagin (TU Darmstadt, AIML Lab)
Kristian Kersting (TU Darmstadt)
More from the Same Authors
- 2023 Poster: Do Not Marginalize Mechanisms, Rather Consolidate!
  Moritz Willig · Matej Zečević · Devendra Dhami · Kristian Kersting
- 2023 Poster: Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction
  Quentin Delfosse · Hikaru Shindo · Devendra Dhami · Kristian Kersting
- 2023 Poster: ATMAN: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation
  Björn Deiseroth · Mayukh Deb · Samuel Weinbach · Manuel Brack · Patrick Schramowski · Kristian Kersting
- 2023 Poster: SEGA: Instructing Text-to-Image Models using Semantic Guidance
  Manuel Brack · Felix Friedrich · Dominik Hintersdorf · Lukas Struppek · Patrick Schramowski · Kristian Kersting
- 2023 Poster: MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
  Marco Bellagente · Hannah Teufel · Manuel Brack · Björn Deiseroth · Felix Friedrich · Constantin Eichenberg · Andrew Dai · Robert Baldock · Souradeep Nanda · Koen Oostermeijer · Andres Felipe Cruz-Salinas · Patrick Schramowski · Kristian Kersting · Samuel Weinbach
- 2023 Poster: Characteristic Circuit
  Zhongjie Yu · Martin Trapp · Kristian Kersting
- 2023 Oral: Characteristic Circuit
  Zhongjie Yu · Martin Trapp · Kristian Kersting
- 2022: Panel
  Guy Van den Broeck · Cassio de Campos · Denis Maua · Kristian Kersting · Rianne van den Berg
- 2021 Poster: Periodic Activation Functions Induce Stationarity
  Lassi Meronen · Martin Trapp · Arno Solin
- 2021 Poster: Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models
  Matej Zečević · Devendra Dhami · Athresh Karanam · Sriraam Natarajan · Kristian Kersting