Undirected graphical models are compact representations of joint probability distributions over random variables. To solve inference tasks of interest, graphical models of arbitrary topology can be trained using empirical risk minimization. However, to solve inference tasks that were not seen during training, these empirically-trained graphical models (EGMs) often need to be re-trained. Instead, we propose an inference-agnostic adversarial training framework that produces an infinitely large ensemble of graphical models (AGMs). The ensemble is optimized to generate data within the GAN framework, and inference is performed using a finite subset of these models. AGMs perform comparably with EGMs on inference tasks that the latter were specifically optimized for. Most importantly, AGMs generalize significantly better to unseen inference tasks than EGMs, and also than deep neural architectures such as GibbsNet and VAEAC that allow arbitrary conditioning. Finally, AGMs allow fast data sampling, competitive with Gibbs sampling from EGMs.
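The abstract contrasts the proposed AGMs with Gibbs sampling from conventionally trained undirected models. As background, the sketch below shows what Gibbs sampling from a small pairwise binary Markov random field looks like; the coupling matrix and potentials are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Illustrative only: Gibbs sampling from a small pairwise binary MRF
# with states in {-1, +1}. Potentials below are made up for the demo.
rng = np.random.default_rng(0)

n = 4                                   # number of binary variables
# symmetric pairwise couplings; theta[i, j] = 0 means no edge i-j
theta = np.array([[0.0, 1.0, 0.0, 0.5],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0],
                  [0.5, 0.0, 1.0, 0.0]])
bias = np.zeros(n)                      # unary potentials

def gibbs_sample(n_steps=1000):
    """Run one Gibbs chain and return the final joint state."""
    x = rng.choice([-1, 1], size=n)
    for _ in range(n_steps):
        for i in range(n):
            # local field on x_i (theta[i, i] == 0, so x[i] drops out)
            field = bias[i] + theta[i] @ x
            # P(x_i = +1 | rest) under the Ising-style energy
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

sample = gibbs_sample()
print(sample)
```

Each sweep resamples every variable from its exact conditional given the current neighbours; the paper's point is that an ensemble of adversarially trained models can produce samples competitively fast compared with running such chains.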
Author Information
Adarsh Keshav Jeewajee (Massachusetts Institute of Technology)
Leslie Kaelbling (MIT)
More from the Same Authors
- 2020: Robotic gripper design with Evolutionary Strategies and Graph Element Networks
  Ferran Alet · Maria Bauza · Adarsh K Jeewajee · Max Thomsen · Alberto Rodriguez · Leslie Kaelbling · Tomás Lozano-Pérez
- 2022: Solving PDDL Planning Problems with Pretrained Large Language Models
  Tom Silver · Varun Hariprasad · Reece Shuttleworth · Nishanth Kumar · Tomás Lozano-Pérez · Leslie Kaelbling
- 2022 Poster: PDSketch: Integrated Domain Programming, Learning, and Planning
  Jiayuan Mao · Tomás Lozano-Pérez · Josh Tenenbaum · Leslie Kaelbling
- 2021 Poster: Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization
  Clement Gehring · Kenji Kawaguchi · Jiaoyang Huang · Leslie Kaelbling
- 2021 Poster: Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
  Ferran Alet · Maria Bauza · Kenji Kawaguchi · Nurullah Giray Kuru · Tomás Lozano-Pérez · Leslie Kaelbling
- 2020: Doing for our robots what nature did for us
  Leslie Kaelbling
- 2019 Poster: Neural Relational Inference with Fast Modular Meta-learning
  Ferran Alet · Erica Weng · Tomás Lozano-Pérez · Leslie Kaelbling
- 2018: Discussion Panel: Ryan Adams, Nicolas Heess, Leslie Kaelbling, Shie Mannor, Emo Todorov (moderator: Roy Fox)
  Ryan Adams · Nicolas Heess · Leslie Kaelbling · Shie Mannor · Emo Todorov · Roy Fox
- 2018: On the Value of Knowing What You Don't Know: Learning to Sample and Sampling to Learn for Robot Planning (Leslie Kaelbling)
  Leslie Kaelbling
- 2018: Leslie Kaelbling
  Leslie Kaelbling
- 2018 Workshop: Infer to Control: Probabilistic Reinforcement Learning and Structured Control
  Leslie Kaelbling · Martin Riedmiller · Marc Toussaint · Igor Mordatch · Roy Fox · Tuomas Haarnoja
- 2018: Talk 8: Leslie Kaelbling - Learning models of very large hybrid domains
  Leslie Kaelbling
- 2018 Poster: Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
  Zi Wang · Beomjoon Kim · Leslie Kaelbling
- 2018 Spotlight: Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
  Zi Wang · Beomjoon Kim · Leslie Kaelbling
- 2015 Poster: Bayesian Optimization with Exponential Convergence
  Kenji Kawaguchi · Leslie Kaelbling · Tomás Lozano-Pérez
- 2008 Poster: Multi-Agent Filtering with Infinitely Nested Beliefs
  Luke Zettlemoyer · Brian Milch · Leslie Kaelbling
- 2008 Spotlight: Multi-Agent Filtering with Infinitely Nested Beliefs
  Luke Zettlemoyer · Brian Milch · Leslie Kaelbling
- 2007 Workshop: The Grammar of Vision: Probabilistic Grammar-Based Models for Visual Scene Understanding and Object Categorization
  Virginia Savova · Josh Tenenbaum · Leslie Kaelbling · Alan Yuille