
Neural Abstract Machines & Program Induction
Matko Bošnjak · Nando de Freitas · Tejas Kulkarni · Arvind Neelakantan · Scott E Reed · Sebastian Riedel · Tim Rocktäschel

Fri Dec 09 11:00 PM -- 09:30 AM (PST) @ Room 113
Event URL: https://uclmr.github.io/nampi/

Machine intelligence capable of learning complex procedural behavior, inducing (latent) programs, and reasoning with these programs is key to solving artificial intelligence. The problems of learning procedural behavior and program induction have been studied from different perspectives in many computer science fields, such as program synthesis, probabilistic programming, inductive logic programming, reinforcement learning, and, more recently, deep learning. Despite the common goal, however, there has been little communication and collaboration between the fields working on this problem.

Recently, the deep learning community has seen many successes in training neural networks that use trainable memory abstractions. This has led to the development of neural networks with differentiable data structures, such as Neural Turing Machines, Memory Networks, Neural Stacks, and Hierarchical Attentive Memory, among others. At the same time, neural program induction models such as Neural Programmer-Interpreters and Neural Programmer have generated considerable excitement, promising the induction of algorithmic behavior and the integration of programming languages into the processes of execution and induction, while remaining end-to-end trainable. Trainable program induction models have the potential to make a substantial impact on many problems involving long-term memory, reasoning, and procedural execution, such as question answering, dialog, and robotics.

The aim of the NAMPI workshop is to bring together researchers and practitioners from academia and industry, in the areas of deep learning, program synthesis, probabilistic programming, inductive programming, and reinforcement learning, to exchange ideas on the future of program induction, with a special focus on neural network models and abstract machines. Through this workshop we aim to identify common challenges, exchange ideas and lessons learned across the different fields, and establish a set of standard evaluation benchmarks for approaches that learn with abstraction and/or reason with induced programs.

Areas of interest for discussion and submissions include, but are not limited to (in alphabetical order):
- Applications
- Compositionality in Representation Learning
- Differentiable Memory
- Differentiable Data Structures
- Function and (sub-)Program Compositionality
- Inductive Logic Programming
- Knowledge Representation in Neural Abstract Structures
- Large-scale Program Induction
- Meta-Learning and Self-Improvement
- Neural Abstract Machines
- Program Induction: Datasets, Tasks, and Evaluation
- Program Synthesis
- Probabilistic Programming
- Reinforcement Learning for Program Induction
- Semantic Parsing

Fri 11:50 p.m. - 12:00 a.m.
Sat 12:00 a.m. - 12:30 a.m.
Stephen Muggleton - What use is Abstraction in Deep Program Induction? (Session)
Sat 12:30 a.m. - 1:00 a.m.
Daniel Tarlow - In Search of Strong Generalization: Building Structured Models in the Age of Neural Networks (Session)
Sat 1:00 a.m. - 1:30 a.m.
Charles Sutton - Learning Program Representation: Symbols to Semantics (Session)
Sat 1:30 a.m. - 2:00 a.m.
Coffee Break (Break)
Sat 2:00 a.m. - 2:30 a.m.
Doina Precup - From temporal abstraction to programs (Session)
Sat 2:30 a.m. - 3:00 a.m.
Rob Fergus - Learning to Compose by Delegation (Session)
Sat 3:00 a.m. - 3:30 a.m.
Percy Liang - How Can We Write Large Programs without Thinking? (Session)
Sat 3:30 a.m. - 5:00 a.m.
Lunch (Break)
Sat 5:00 a.m. - 5:30 a.m.
Martin Vechev - Program Synthesis and Machine Learning (Session)
Sat 5:30 a.m. - 6:00 a.m.
Ed Grefenstette - Limitations of RNNs: a computational perspective (Session)
Sat 6:00 a.m. - 7:00 a.m.
Coffee Break & Poster Session (Break & Poster session)
Sat 7:00 a.m. - 7:30 a.m.
Jürgen Schmidhuber - Learning how to Learn Learning Algorithms: Recursive Self-Improvement (Session)
Sat 7:30 a.m. - 8:00 a.m.
Joshua Tenenbaum & Kevin Ellis - Bayesian program learning: Prospects for building more human-like AI systems (Session)
Sat 8:00 a.m. - 8:30 a.m.
Alex Graves - Learning When to Halt With Adaptive Computation Time (Session)
Sat 8:30 a.m. - 9:30 a.m.
Debate with Percy Liang, Jürgen Schmidhuber, Joshua Tenenbaum and Martin Vechev (Discussion Panel)
Sat 9:30 a.m. - 9:40 a.m.
Closing word

Author Information

Matko Bošnjak (University College London)
Nando de Freitas (DeepMind)
Tejas Kulkarni (DeepMind)
Arvind Neelakantan (University of Massachusetts Amherst)
Scott E Reed (University of Michigan)
Sebastian Riedel (University College London)
Tim Rocktäschel (University of Oxford)

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford. Tim obtained his Ph.D. in the Machine Reading group at University College London under the supervision of Sebastian Riedel. He received a Google Ph.D. Fellowship in Natural Language Processing in 2017 and a Microsoft Research Ph.D. Scholarship in 2013. In Summer 2015, he worked as a Research Intern at Google DeepMind. In 2012, he obtained his Diploma (equivalent to an M.Sc.) in Computer Science from the Humboldt-Universität zu Berlin. Between 2010 and 2012, he worked as a Student Assistant and, in 2013, as a Research Assistant in the Knowledge Management in Bioinformatics group at Humboldt-Universität zu Berlin. Tim's research focuses on sample-efficient and interpretable machine learning models that learn from world, domain, and commonsense knowledge in symbolic and textual form. His work is at the intersection of deep learning, reinforcement learning, natural language processing, program synthesis, and formal logic.
