We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning. Reusing high-level concepts across domains and learning complex procedures are key challenges in lifelong learning. We show that a program synthesis approach that combines gradient descent with combinatorial search over programs can be a more effective response to these challenges than purely neural methods. Our framework, called HOUDINI, represents neural networks as strongly typed, differentiable functional programs that use symbolic higher-order combinators to compose a library of neural functions. Our learning algorithm consists of: (1) a symbolic program synthesizer that performs a type-directed search over parameterized programs, deciding which library functions to reuse and which architectures to combine them with, while learning a sequence of tasks; and (2) a neural module that trains these programs using stochastic gradient descent. We evaluate HOUDINI on three benchmarks that combine perception with the algorithmic tasks of counting, summing, and shortest-path computation. Our experiments show that HOUDINI transfers high-level concepts more effectively than traditional transfer learning and progressive neural networks, and that the typed representation of networks significantly accelerates the search.
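To make the combinator idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of how typed higher-order combinators can compose "neural" functions into a program for the summing task. The names `compose`, `map_c`, and `fold_c`, and the stand-in perception module, are hypothetical; in HOUDINI the mapped function would be a trainable network and the whole program would be differentiable.

```python
# Illustrative sketch of composing library functions with higher-order
# combinators, in the style described in the abstract. All names here
# (compose, map_c, fold_c, classify_digit) are hypothetical stand-ins.
from typing import Callable, List, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Sequential-composition combinator: run f, then g."""
    return lambda x: g(f(x))

def map_c(f: Callable[[A], B]) -> Callable[[List[A]], List[B]]:
    """Map combinator: apply a (neural) function to each list element."""
    return lambda xs: [f(x) for x in xs]

def fold_c(f: Callable[[B, A], B], init: B) -> Callable[[List[A]], B]:
    """Fold combinator: aggregate list elements with a binary function."""
    def run(xs: List[A]) -> B:
        acc = init
        for x in xs:
            acc = f(acc, x)
        return acc
    return run

# Stand-in "perception" module; a real system would use a trained classifier.
def classify_digit(img: dict) -> int:
    return img["label"]

# Program for the summing benchmark: map perception over the images,
# then fold the resulting digits with addition.
sum_digits = compose(map_c(classify_digit), fold_c(lambda a, b: a + b, 0))

imgs = [{"label": 3}, {"label": 1}, {"label": 4}]
print(sum_digits(imgs))  # 8
```

The point of the typed representation is visible even in this toy: `map_c(classify_digit)` has type `List[Image] -> List[int]` and `fold_c(+, 0)` has type `List[int] -> int`, so a type-directed synthesizer can rule out ill-formed compositions before any training occurs.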
Author Information
Lazar Valkov (University of Edinburgh)
Dipak Chaudhari (Rice University)
Akash Srivastava (University of Edinburgh)
Charles Sutton (Google)
Swarat Chaudhuri (Rice University)
More from the Same Authors
- 2021 Spotlight: Neural Program Generation Modulo Static Analysis »
  Rohan Mukherjee · Yeming Wen · Dipak Chaudhari · Thomas Reps · Swarat Chaudhuri · Christopher Jermaine
- 2021 Poster: A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics »
  Kai Xu · Akash Srivastava · Dan Gutfreund · Felix Sosa · Tomer Ullman · Josh Tenenbaum · Charles Sutton
- 2021 Poster: Targeted Neural Dynamical Modeling »
  Cole Hurwitz · Akash Srivastava · Kai Xu · Justin Jude · Matthew Perich · Lee Miller · Matthias Hennig
- 2021 Poster: Neural Program Generation Modulo Static Analysis »
  Rohan Mukherjee · Yeming Wen · Dipak Chaudhari · Thomas Reps · Swarat Chaudhuri · Christopher Jermaine
- 2019 Poster: Imitation-Projected Programmatic Reinforcement Learning »
  Abhinav Verma · Hoang Le · Yisong Yue · Swarat Chaudhuri
- 2018: A simple transfer-learning extension of Hyperband »
  Lazar Valkov
- 2018: Panel on research process »
  Zachary Lipton · Charles Sutton · Finale Doshi-Velez · Hanna Wallach · Suchi Saria · Rich Caruana · Thomas Rainforth
- 2018: Charles Sutton »
  Charles Sutton
- 2017 Poster: VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning »
  Akash Srivastava · Lazar Valkov · Chris Russell · Michael Gutmann · Charles Sutton
- 2016 Workshop: Towards an Artificial Intelligence for Data Science »
  Charles Sutton · James Geddes · Zoubin Ghahramani · Padhraic Smyth · Chris Williams
- 2015 Poster: Latent Bayesian melding for integrating individual and population models »
  Mingjun Zhong · Nigel Goddard · Charles Sutton
- 2015 Spotlight: Latent Bayesian melding for integrating individual and population models »
  Mingjun Zhong · Nigel Goddard · Charles Sutton
- 2014 Poster: Semi-Separable Hamiltonian Monte Carlo for Inference in Bayesian Hierarchical Models »
  Yichuan Zhang · Charles Sutton
- 2014 Poster: Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation »
  Mingjun Zhong · Nigel Goddard · Charles Sutton
- 2012 Poster: Continuous Relaxations for Discrete Hamiltonian Monte Carlo »
  Zoubin Ghahramani · Yichuan Zhang · Charles Sutton · Amos Storkey
- 2012 Spotlight: Continuous Relaxations for Discrete Hamiltonian Monte Carlo »
  Zoubin Ghahramani · Yichuan Zhang · Charles Sutton · Amos Storkey
- 2011 Poster: Quasi-Newton Methods for Markov Chain Monte Carlo »
  Yichuan Zhang · Charles Sutton