At the heart of deep learning we aim to use neural networks as function approximators: we train them to produce outputs from inputs in emulation of a ground-truth function or data-generation process. In many cases we only have access to input-output pairs from the ground truth, but it is becoming increasingly common to also have access to derivatives of the target output with respect to the input (for example, when the ground-truth function is itself a neural network, as in network compression or distillation). Generally these target derivatives are not computed, or are ignored. This paper introduces Sobolev Training for neural networks, a method for incorporating these target derivatives, in addition to the target values, during training. By optimising neural networks to approximate not only the target function's outputs but also its derivatives, we encode additional information about the target function in the parameters of the network. This improves the quality of our predictors, as well as the data efficiency and generalisation capabilities of the learned function approximation. We provide theoretical justification for this approach, together with empirical evidence on three distinct domains: regression on classical optimisation datasets, distilling the policy of an agent playing Atari, and large-scale applications of synthetic gradients. In all three domains, the use of Sobolev Training, employing target derivatives in addition to target values, results in models with higher accuracy and stronger generalisation.
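The core idea of the abstract can be sketched in a few lines: the training loss penalises the model's error on the target's values and on its first derivatives. The functions below (a sine target, a one-parameter linear model) are illustrative assumptions, not the paper's experimental setup.

```python
import math

def f(x):          # ground-truth target function (assumed known)
    return math.sin(x)

def df(x):         # its derivative, assumed available as a training signal
    return math.cos(x)

def g(a, x):       # toy one-parameter model: g(x) = a * x
    return a * x

def dg(a, x):      # the model's derivative with respect to its input x
    return a

def sobolev_loss(a, xs):
    """Mean squared error on values plus mean squared error on derivatives."""
    value_err = sum((g(a, x) - f(x)) ** 2 for x in xs) / len(xs)
    deriv_err = sum((dg(a, x) - df(x)) ** 2 for x in xs) / len(xs)
    return value_err + deriv_err

xs = [i / 10 - 1.0 for i in range(21)]   # evaluation grid on [-1, 1]
loss = sobolev_loss(1.0, xs)             # a = 1 fits sin well near 0
```

In practice both derivatives would come from automatic differentiation of the target and the trained network rather than closed-form expressions; the two error terms are then minimised jointly by gradient descent.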
Author Information
Wojciech Czarnecki (DeepMind)
Simon Osindero (DeepMind)
Max Jaderberg (DeepMind)
Grzegorz Swirszcz (DeepMind @ Google)
Razvan Pascanu (Google DeepMind)
More from the Same Authors
- 2021: LiRo: Benchmark and leaderboard for Romanian language tasks
  Stefan Dumitrescu · Petru Rebeja · Beata Lorincz · Mihaela Gaman · Andrei Avram · Mihai Ilie · Andrei Pruteanu · Adriana Stan · Lorena Rosia · Cristina Iacobescu · Luciana Morogan · George Dima · Gabriel Marchidan · Traian Rebedea · Madalina Chitez · Dani Yogatama · Sebastian Ruder · Radu Tudor Ionescu · Razvan Pascanu · Viorica Patraucean
- 2022: Pre-training via Denoising for Molecular Property Prediction
  Sheheryar Zaidi · Michael Schaarschmidt · James Martens · Hyunjik Kim · Yee Whye Teh · Alvaro Sanchez Gonzalez · Peter Battaglia · Razvan Pascanu · Jonathan Godwin
- 2022: When Does Re-initialization Work?
  Sheheryar Zaidi · Tudor Berariu · Hyunjik Kim · Jorg Bornschein · Claudia Clopath · Yee Whye Teh · Razvan Pascanu
- 2022 Poster: Disentangling Transfer in Continual Reinforcement Learning
  Maciej Wolczyk · Michał Zając · Razvan Pascanu · Łukasz Kuciński · Piotr Miłoś
- 2021 Poster: Powerpropagation: A sparsity inducing weight reparameterisation
  Jonathan Richard Schwarz · Siddhant Jayakumar · Razvan Pascanu · Peter E Latham · Yee Teh
- 2021 Poster: Continual World: A Robotic Benchmark For Continual Reinforcement Learning
  Maciej Wołczyk · Michał Zając · Razvan Pascanu · Łukasz Kuciński · Piotr Miłoś
- 2021 Poster: On the Role of Optimization in Double Descent: A Least Squares Study
  Ilja Kuzborskij · Csaba Szepesvari · Omar Rivasplata · Amal Rannen-Triki · Razvan Pascanu
- 2020 Workshop: Beyond BackPropagation: Novel Ideas for Training Neural Architectures
  Mateusz Malinowski · Grzegorz Swirszcz · Viorica Patraucean · Marco Gori · Yanping Huang · Sindy Löwe · Anna Choromanska
- 2020: Live Intro
  Mateusz Malinowski · Viorica Patraucean · Grzegorz Swirszcz · Sindy Löwe · Anna Choromanska · Marco Gori · Yanping Huang
- 2020 Poster: Top-KAST: Top-K Always Sparse Training
  Siddhant Jayakumar · Razvan Pascanu · Jack Rae · Simon Osindero · Erich Elsen
- 2020 Poster: Discovering Reinforcement Learning Algorithms
  Junhyuk Oh · Matteo Hessel · Wojciech Czarnecki · Zhongwen Xu · Hado van Hasselt · Satinder Singh · David Silver
- 2020 Poster: Pointer Graph Networks
  Petar Veličković · Lars Buesing · Matthew Overlan · Razvan Pascanu · Oriol Vinyals · Charles Blundell
- 2020 Spotlight: Pointer Graph Networks
  Petar Veličković · Lars Buesing · Matthew Overlan · Razvan Pascanu · Oriol Vinyals · Charles Blundell
- 2020 Tutorial: (Track3) Designing Learning Dynamics Q&A
  Marta Garnelo · David Balduzzi · Wojciech Czarnecki
- 2020 Poster: Understanding the Role of Training Regimes in Continual Learning
  Seyed Iman Mirzadeh · Mehrdad Farajtabar · Razvan Pascanu · Hassan Ghasemzadeh
- 2020 Poster: Real World Games Look Like Spinning Tops
  Wojciech Czarnecki · Gauthier Gidel · Brendan Tracey · Karl Tuyls · Shayegan Omidshafiei · David Balduzzi · Max Jaderberg
- 2020 Tutorial: (Track3) Designing Learning Dynamics
  Marta Garnelo · David Balduzzi · Wojciech Czarnecki
- 2019 Poster: Continual Unsupervised Representation Learning
  Dushyant Rao · Francesco Visin · Andrei A Rusu · Razvan Pascanu · Yee Whye Teh · Raia Hadsell
- 2018: Introduction of the workshop
  Razvan Pascanu · Yee Teh · Mark Ring · Marc Pickett
- 2018 Workshop: Continual Learning
  Razvan Pascanu · Yee Teh · Marc Pickett · Mark Ring
- 2018 Poster: Relational recurrent neural networks
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017 Poster: Distral: Robust multitask reinforcement learning
  Yee Teh · Victor Bapst · Wojciech Czarnecki · John Quan · James Kirkpatrick · Raia Hadsell · Nicolas Heess · Razvan Pascanu
- 2017 Poster: A simple neural network module for relational reasoning
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Poster: Imagination-Augmented Agents for Deep Reinforcement Learning
  Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra
- 2017 Spotlight: A simple neural network module for relational reasoning
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Oral: Imagination-Augmented Agents for Deep Reinforcement Learning
  Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra
- 2017 Poster: Visual Interaction Networks: Learning a Physics Simulator from Video
  Nicholas Watters · Daniel Zoran · Theophane Weber · Peter Battaglia · Razvan Pascanu · Andrea Tacchetti
- 2016 Workshop: Continual Learning and Deep Networks
  Razvan Pascanu · Mark Ring · Tom Schaul
- 2016 Poster: Unsupervised Learning of 3D Structure from Images
  Danilo Jimenez Rezende · S. M. Ali Eslami · Shakir Mohamed · Peter Battaglia · Max Jaderberg · Nicolas Heess
- 2016 Poster: Interaction Networks for Learning about Objects, Relations and Physics
  Peter Battaglia · Razvan Pascanu · Matthew Lai · Danilo Jimenez Rezende · koray kavukcuoglu
- 2015 Poster: Natural Neural Networks
  Guillaume Desjardins · Karen Simonyan · Razvan Pascanu · koray kavukcuoglu
- 2015 Poster: Spatial Transformer Networks
  Max Jaderberg · Karen Simonyan · Andrew Zisserman · koray kavukcuoglu
- 2015 Spotlight: Spatial Transformer Networks
  Max Jaderberg · Karen Simonyan · Andrew Zisserman · koray kavukcuoglu
- 2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
  Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio
- 2014 Poster: On the Number of Linear Regions of Deep Neural Networks
  Guido F Montufar · Razvan Pascanu · Kyunghyun Cho · Yoshua Bengio