Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide powerful tools for constraining procedural models in computer graphics, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks which control how the model makes random choices based on the output it has generated thus far. We call such models neurally-guided procedural models. As a pre-computation, we train these models to maximize the likelihood of example outputs generated via SMC. They are then used as efficient SMC importance samplers, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.
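To make the mechanism described above concrete, the following is a minimal sketch, not the authors' implementation: a toy L-system-like growth model whose branching choices are proposed by a small neural network conditioned on features of the partial output, with the per-choice importance weight (prior probability over proposal probability) accumulated so the guided model can serve as a proposal inside SMC. All class and function names, the feature summary, and the toy branching prior are hypothetical illustrations.

```python
# Minimal sketch of a neurally-guided procedural model used as an importance
# sampler. Hypothetical names throughout; not the paper's implementation.
import math
import random

import torch
import torch.nn as nn


class GuideNet(nn.Module):
    """Maps a fixed-size summary of the partial output to a branch probability."""

    def __init__(self, feature_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


def unguided_branch_prob(depth: int) -> float:
    """Prior probability of branching in the unguided toy growth model."""
    return 0.5 / (1 + depth)


def run_guided_model(guide: GuideNet, max_depth: int = 5):
    """Generate one output; return (segments, log importance weight = log p - log q)."""
    segments = []
    log_weight = 0.0
    stack = [(0.0, 0.0, math.pi / 2, 0)]  # (x, y, heading, depth)
    while stack:
        x, y, heading, depth = stack.pop()
        nx, ny = x + math.cos(heading), y + math.sin(heading)
        segments.append(((x, y), (nx, ny)))
        if depth >= max_depth:
            continue
        # Summarize the output generated thus far (hypothetical hand-picked features).
        feats = torch.tensor([[nx, ny, heading, float(depth)]])
        q = float(guide(feats))                       # proposal prob. from the neural guide
        q = min(max(q, 1e-6), 1.0 - 1e-6)             # guard against saturation
        p = unguided_branch_prob(depth)               # prior prob. from the unguided model
        branch = random.random() < q
        # Accumulate the importance weight for this random choice.
        log_weight += math.log(p if branch else 1 - p) - math.log(q if branch else 1 - q)
        if branch:
            stack.append((nx, ny, heading - 0.4, depth + 1))
            stack.append((nx, ny, heading + 0.4, depth + 1))
    return segments, log_weight


if __name__ == "__main__":
    guide = GuideNet()  # in the paper's setup this would be pre-trained on SMC-generated examples
    segments, log_w = run_guided_model(guide)
    print(f"{len(segments)} segments, log importance weight {log_w:.3f}")
```

In this sketch, pre-training the guide would amount to maximizing the log-probability it assigns to the random choices recorded from example outputs generated via SMC, matching the training objective stated in the abstract.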
Author Information
Daniel Ritchie (Stanford University)
Anna Thomas (Stanford University)
Pat Hanrahan (Stanford University)
Noah Goodman (Stanford University)
More from the Same Authors
- 2021 : DABS: a Domain-Agnostic Benchmark for Self-Supervised Learning »
  Alex Tamkin · Vincent Liu · Rongfei Lu · Daniel Fein · Colin Schultz · Noah Goodman
- 2021 : Learning to solve complex tasks by growing knowledge culturally across generations »
  Michael Tessler · Jason Madeano · Pedro Tsividis · Noah Goodman · Josh Tenenbaum
- 2021 : Spotlight Talk: Learning to solve complex tasks by growing knowledge culturally across generations »
  Noah Goodman · Josh Tenenbaum · Michael Tessler · Jason Madeano
- 2021 : Multi-party referential communication in complex strategic games »
  Jessica Mankewitz · Veronica Boyce · Brandon Waldon · Georgia Loukatou · Dhara Yu · Jesse Mu · Noah Goodman · Michael C Frank
- 2021 Workshop: Meaning in Context: Pragmatic Communication in Humans and Machines »
  Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman
- 2021 : Opening remarks »
  Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman
- 2021 Poster: Emergent Communication of Generalizations »
  Jesse Mu · Noah Goodman
- 2021 Poster: Contrastive Reinforcement Learning of Symbolic Reasoning Domains »
  Gabriel Poesia · WenXin Dong · Noah Goodman
- 2021 Poster: Improving Compositionality of Neural Networks by Decoding Representations to Inputs »
  Mike Wu · Noah Goodman · Stefano Ermon
- 2021 Panel: The Consequences of Massive Scaling in Machine Learning »
  Noah Goodman · Melanie Mitchell · Joelle Pineau · Oriol Vinyals · Jared Kaplan
- 2020 Poster: Language Through a Prism: A Spectral Approach for Multiscale Language Representations »
  Alex Tamkin · Dan Jurafsky · Noah Goodman
- 2019 Poster: Variational Bayesian Optimal Experimental Design »
  Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman
- 2019 Spotlight: Variational Bayesian Optimal Experimental Design »
  Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman
- 2018 Poster: Bias and Generalization in Deep Generative Models: An Empirical Study »
  Shengjia Zhao · Hongyu Ren · Arianna Yuan · Jiaming Song · Noah Goodman · Stefano Ermon
- 2018 Spotlight: Bias and Generalization in Deep Generative Models: An Empirical Study »
  Shengjia Zhao · Hongyu Ren · Arianna Yuan · Jiaming Song · Noah Goodman · Stefano Ermon
- 2018 Poster: Multimodal Generative Models for Scalable Weakly-Supervised Learning »
  Mike Wu · Noah Goodman
- 2018 Poster: Learning Compressed Transforms with Low Displacement Rank »
  Anna Thomas · Albert Gu · Tri Dao · Atri Rudra · Christopher Ré
- 2017 : Morning panel discussion »
  Jürgen Schmidhuber · Noah Goodman · Anca Dragan · Pushmeet Kohli · Dhruv Batra
- 2017 : "Language in context" »
  Noah Goodman
- 2017 Poster: Learning Disentangled Representations with Semi-Supervised Deep Generative Models »
  Siddharth Narayanaswamy · Brooks Paige · Jan-Willem van de Meent · Alban Desmaison · Noah Goodman · Pushmeet Kohli · Frank Wood · Philip Torr
- 2015 Workshop: Bounded Optimality and Rational Metareasoning »
  Samuel J Gershman · Falk Lieder · Tom Griffiths · Noah Goodman
- 2013 Poster: Learning and using language via recursive pragmatic reasoning about other agents »
  Nathaniel J Smith · Noah Goodman · Michael C Frank
- 2013 Poster: Learning Stochastic Inverses »
  Andreas Stuhlmüller · Jacob Taylor · Noah Goodman
- 2012 Workshop: Probabilistic Programming: Foundations and Applications (2 day) »
  Vikash Mansinghka · Daniel Roy · Noah Goodman
- 2012 Poster: Burn-in, bias, and the rationality of anchoring »
  Falk Lieder · Tom Griffiths · Noah Goodman
- 2011 Poster: Nonstandard Interpretations of Probabilistic Programs for Efficient Inference »
  David Wingate · Noah Goodman · Andreas Stuhlmüller · Jeffrey Siskind