NIPS 2009


Workshop

Approximate Learning of Large Scale Graphical Models

Russ Salakhutdinov · Amir Globerson · David Sontag

Hilton: Mt. Currie North

Undirected graphical models provide a powerful framework for representing dependency structure between random variables. Learning the parameters of undirected models plays a crucial role in solving key problems in many machine learning applications, including natural language processing, visual object recognition, speech perception, information retrieval, computational biology, and many others.

Learning in undirected graphical models of large treewidth is difficult because of the hard inference problems it induces: computing the partition function for maximum likelihood learning, or finding the MAP assignment for margin-based loss functions. Over the last decade, there has been considerable progress in developing algorithms for approximating the partition function and MAP assignment, both via variational approaches (e.g., belief propagation) and sampling algorithms (e.g., MCMC). More recently, researchers have begun to apply these methods to learning large, densely connected undirected graphical models that may contain millions of parameters. A notable example is the learning of Deep Belief Networks and Deep Boltzmann Machines, which employ MCMC strategies to greedily learn deep hierarchical models.
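As an illustrative sketch (not part of the workshop materials), the contrast between exact and approximate inference can be seen on a toy Boltzmann machine: for a handful of binary units the partition function is computable by brute-force enumeration, while a Gibbs sampler (a simple MCMC strategy) estimates the same marginals without ever summing over all states. All names here (`energy`, `gibbs_marginals`, the weight scales) are illustrative choices, and the model is deliberately tiny; the workshop's focus is precisely the regime where the exact sum below is intractable.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy Boltzmann machine: n binary units s_i in {0, 1}, symmetric weights W
# with zero diagonal, biases b. Energy E(s) = -0.5 * s^T W s - b^T s.
n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

# Exact inference by enumerating all 2^n states: log-partition function
# log Z and exact marginals p(s_i = 1). Feasible only for tiny n.
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
energies = np.array([energy(s) for s in states])
logZ = np.logaddexp.reduce(-energies)
exact_marginals = (states * np.exp(-energies - logZ)[:, None]).sum(axis=0)

def gibbs_marginals(n_sweeps=20000, burn_in=2000):
    """Estimate p(s_i = 1) by Gibbs sampling: repeatedly resample each
    unit from its conditional p(s_i = 1 | s_-i) = sigmoid(W_i . s + b_i),
    then average the post-burn-in samples."""
    s = rng.integers(0, 2, size=n).astype(float)
    counts = np.zeros(n)
    kept = 0
    for t in range(n_sweeps):
        for i in range(n):
            p = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
            s[i] = float(rng.random() < p)
        if t >= burn_in:
            counts += s
            kept += 1
    return counts / kept

approx_marginals = gibbs_marginals()
print("exact :", np.round(exact_marginals, 3))
print("gibbs :", np.round(approx_marginals, 3))
```

The sampler never touches log Z, which is why the same loop scales to the large, densely connected models discussed above, at the cost of Monte Carlo error and mixing time.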
