

Tutorial

Probabilistic Circuits: Representations, Inference, Learning and Applications

Antonio Vergari · YooJung Choi · Robert Peharz

Moderators: Guy Van den Broeck · Jessica Schrouff

Virtual

Abstract:

In several real-world scenarios, decision making involves complex reasoning, i.e., the ability to answer complex probabilistic queries. Moreover, in many sensitive domains such as healthcare and economic decision making, the result of these queries is required to be exact, as approximations without guarantees would make the decision-making process brittle. In all these scenarios, tractable probabilistic inference and learning are increasingly indispensable. In this tutorial, we will introduce the framework of probabilistic circuits (PCs), under which one can learn deep generative models that guarantee exact inference in polynomial (often linear) time. Thanks to recent algorithmic and theoretical results, which we will discuss in this tutorial, PCs have achieved impressive results in probabilistic modeling, sometimes outperforming intractable models such as variational autoencoders. We will introduce the syntax and semantics of PCs and show how several commonly used ML models -- from Gaussian mixture models to HMMs and decision trees -- can be understood as computational graphs within the PC framework. We will discuss how PCs are special cases of neural networks, and how restricting the network with certain structural properties enables different tractability scenarios. This unified view of probabilistic ML models opens up a range of ways to learn PCs from data and use them in real-world applications. We will provide a unifying view over several algorithms to learn both the structure and parameters of PCs and discuss modern approaches to scale them on GPUs. Lastly, we will showcase several successful application scenarios where PCs have been employed as an alternative to or in conjunction with intractable models, including robust image classification, lossless compression, predictions in the presence of missing values, fairness certification, and scene understanding.
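
To make the "commonly used models as computational graphs" idea concrete, below is a minimal sketch (not part of the tutorial materials) of a two-component Gaussian mixture over variables X1 and X2 written as a tiny probabilistic circuit: a sum node over product nodes whose children are univariate Gaussian leaves. Exact marginalization only requires replacing a marginalized variable's leaf densities with 1, which is why inference remains linear in the circuit size. All parameter values are made up for illustration.

```python
# Minimal probabilistic-circuit sketch: a 2-component GMM over (X1, X2).
# Structure: sum node -> product nodes -> univariate Gaussian leaves.
from scipy.stats import norm

# Leaf parameters: one (mean, std) pair per variable and mixture component.
leaves = {
    ("X1", 0): (0.0, 1.0), ("X2", 0): (1.0, 0.5),
    ("X1", 1): (3.0, 1.0), ("X2", 1): (-2.0, 2.0),
}
weights = [0.3, 0.7]  # sum-node weights, must sum to 1


def circuit_density(x1=None, x2=None):
    """Evaluate the circuit bottom-up; passing None marginalizes that variable exactly."""
    total = 0.0
    for k, w in enumerate(weights):
        # Product node: multiply the leaf densities of component k.
        p1 = 1.0 if x1 is None else norm.pdf(x1, *leaves[("X1", k)])
        p2 = 1.0 if x2 is None else norm.pdf(x2, *leaves[("X2", k)])
        total += w * p1 * p2  # sum node: weighted sum over components
    return total


print(circuit_density(x1=0.5, x2=1.0))  # joint density p(X1=0.5, X2=1.0)
print(circuit_density(x1=0.5))          # exact marginal p(X1=0.5), same single pass
```

The same bottom-up evaluation answers both the joint and the marginal query in one pass over the graph, which is the tractability property the tutorial's structural conditions (such as smoothness and decomposability) are designed to guarantee.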
