
Composing graphical models with neural networks for structured representations and fast inference

Matthew Johnson · David Duvenaud · Alex Wiltschko · Ryan Adams · Sandeep R Datta

Area 5+6+7+8 #57

Keywords: [ Graphical Models ] [ Nonlinear Dimension Reduction and Manifold Learning ] [ Variational Inference ] [ (Other) Probabilistic Models and Methods ] [ (Other) Unsupervised Learning Methods ] [ Deep Learning or Neural Networks ]


We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods. Our model family composes latent graphical models with neural network observation likelihoods. For inference, we use recognition networks to produce local evidence potentials, then combine them with the model distribution using efficient message-passing algorithms. All components are trained simultaneously with a single stochastic variational inference objective. We illustrate this framework by automatically segmenting and categorizing mouse behavior from raw depth video, and demonstrate several other example models.
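The key inference step described above, recognition networks producing local evidence potentials that are combined with the latent graphical model by message passing, can be illustrated with a minimal sketch. The example below is a hypothetical 1-D Gaussian case, not the paper's implementation: a stand-in "recognition network" maps each observation to Gaussian natural parameters, and conjugate combination reduces to adding natural parameters of the prior and all local potentials.

```python
import numpy as np

def recognition_potential(x, w=2.0, b=0.0):
    # Stand-in for a trained recognition network: maps an observation x
    # to natural parameters (eta1, eta2) of a local Gaussian evidence
    # potential. The weights w, b are illustrative, not learned here.
    precision = np.log1p(np.exp(w))      # softplus keeps precision positive
    mean = w * x + b
    return np.array([precision * mean, -0.5 * precision])

def posterior_natural_params(xs, prior_mean=0.0, prior_var=1.0):
    # Latent prior N(prior_mean, prior_var) in natural-parameter form.
    eta = np.array([prior_mean / prior_var, -0.5 / prior_var])
    # Conjugate "message passing" for this trivial one-node model:
    # natural parameters of Gaussian factors simply add.
    for x in xs:
        eta = eta + recognition_potential(x)
    return eta

def to_mean_var(eta):
    # Convert natural parameters back to (mean, variance).
    var = -0.5 / eta[1]
    return eta[0] * var, var

mean, var = to_mean_var(posterior_natural_params([0.5, 1.0, 1.5]))
print(mean, var)
```

In richer models from the paper's family (e.g. switching linear dynamical systems for the mouse-behavior example), the same pattern holds: the recognition network supplies conjugate local potentials, so standard message-passing routines handle the structured latent variables, and the whole system is trained end-to-end with one stochastic variational objective.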
