
Iterative Refinement of the Approximate Posterior for Directed Belief Networks
R Devon Hjelm · Russ Salakhutdinov · Kyunghyun Cho · Nebojsa Jojic · Vince Calhoun · Junyoung Chung

Tue Dec 06 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #5

Variational methods that rely on a recognition network to approximate the posterior of directed graphical models offer better inference and learning than previous methods. Recent advances that exploit the capacity and flexibility of this approach have expanded what kinds of models can be trained. However, the recognition network, as a proposal for the posterior, has limited capacity, which can constrain the representational power of the generative model and increase the variance of Monte Carlo estimates. To address these issues, we introduce an iterative refinement procedure for improving the approximate posterior of the recognition network and show that training with the refined posterior is competitive with state-of-the-art methods. The advantages of refinement are further evident in an increased effective sample size, which implies a lower variance of gradient estimates.
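To make the idea of refining a recognition network's proposal concrete, here is a minimal, self-contained sketch. It is not the paper's exact algorithm: it uses a toy conjugate model (p(z) = N(0, 1), p(x | z) = N(z, 1), so the true posterior is N(x/2, 1/2)), a stand-in for the recognition network's output, and plain gradient ascent on the ELBO as the refinement step. All function names and constants are hypothetical.

```python
import math

def elbo(x, m, t):
    """ELBO of q(z) = N(m, e^{2t}) under the toy model, up to an additive constant.

    log p(x, z) = -z^2/2 - (x - z)^2/2 + const, and the entropy of q
    contributes 0.5 * log(s^2) = t (again up to a constant).
    """
    s2 = math.exp(2.0 * t)
    return -(m**2 + s2) / 2.0 - ((x - m)**2 + s2) / 2.0 + t

def refine(x, steps=200, lr=0.1):
    """Refine a crude amortized proposal by gradient ascent on the ELBO."""
    m, t = 0.3 * x, 0.0  # stand-in for a (deliberately poor) recognition-net output
    for _ in range(steps):
        m += lr * (x - 2.0 * m)                     # d ELBO / d m
        t += lr * (1.0 - 2.0 * math.exp(2.0 * t))   # d ELBO / d t
    return m, math.exp(2.0 * t)

x = 2.0
m_refined, s2_refined = refine(x)
# Refinement recovers the true posterior N(x/2, 1/2) and increases the ELBO
# relative to the initial amortized proposal.
```

In an amortized-inference setting the same pattern applies per data point: the recognition network supplies a fast initial proposal, and a few local refinement steps close the gap between that proposal and the best posterior the variational family can express, which in turn tightens the bound and reduces estimator variance.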

Author Information

R Devon Hjelm (University of New Mexico)
Russ Salakhutdinov (University of Toronto)
Kyunghyun Cho (University of Montreal)

Kyunghyun Cho is an associate professor of computer science and data science at New York University and a research scientist at Facebook AI Research. He was a postdoctoral fellow at the Université de Montréal until summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.

Nebojsa Jojic (Microsoft Research)
Vince Calhoun (Mind Research Network)
Junyoung Chung (University of Montreal)
