
Poster

Neural Conditional Probability for Inference

Vladimir Kostic · Grégoire Pacreau · Giacomo Turri · Pietro Novelli · Karim Lounici · Massimiliano Pontil

East Exhibit Hall A-C #4007
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We introduce NCP (Neural Conditional Probability), a novel operator-theoretic approach for learning conditional distributions with a particular focus on inference tasks. NCP can be used to build conditional confidence regions and extract important statistics like conditional quantiles, mean, and covariance. It offers streamlined learning through a single unconditional training phase, enabling efficient inference without retraining even when the conditioning changes. By tapping into the powerful approximation capabilities of neural networks, our method efficiently handles a wide variety of complex probability distributions, effectively dealing with nonlinear relationships between input and output variables. Theoretical guarantees ensure both optimization consistency and statistical accuracy of the NCP method. Our experiments show that our approach matches or beats leading methods using a simple Multi-Layer Perceptron (MLP) with two hidden layers and GELU activations. This demonstrates that a minimalistic architecture with a theoretically grounded loss function can achieve competitive results without sacrificing performance, even when compared against more complex architectures.
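To make the stated architecture concrete, below is a minimal PyTorch sketch of the network the abstract describes: an MLP with two hidden layers and GELU activations used as a feature embedding. The class name `FeatureMLP`, the hidden width, and the embedding dimension are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed, not the authors' released code): the abstract only
# specifies an MLP with two hidden layers and GELU activations; the hidden
# width (128) and embedding dimension (64) here are illustrative choices.
class FeatureMLP(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 128, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical usage: one network embeds the conditioning variable X and
# another embeds Y; after the single unconditional training phase, conditional
# statistics are read off from the learned embeddings without retraining.
embed_x = FeatureMLP(in_dim=10)  # embeds X
embed_y = FeatureMLP(in_dim=5)   # embeds Y
```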
