http://www.probabilistic-numerics.org/meetings/NIPS2016/
Optimization problems in machine learning have aspects that make them more challenging than traditional settings, such as stochasticity and parameters with side effects (e.g., the batch size and structure). The field has invented many different approaches to deal with these demands. Unfortunately, and intriguingly, this extra functionality seems to invariably require the introduction of tuning parameters: step sizes, decay rates, cycle lengths, batch sampling distributions, and so on. Such parameters are not present, or at least not as prominent, in classic optimization methods. Yet getting them right is frequently crucial, and doing so requires inconvenient human “babysitting”.
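To make these "fiddle factors" concrete, here is a minimal sketch of mini-batch SGD on a toy least-squares problem. The function names and the default values of lr0, decay, and batch_size are illustrative assumptions, not prescriptions from the workshop; the point is simply that each keyword argument is a hand-tuned parameter of exactly the kind described above.

```python
import numpy as np

def sgd(grad_fn, w0, n_steps=1000, lr0=0.1, decay=1e-3, batch_size=32, seed=0):
    """Plain mini-batch SGD. Every keyword argument is a tuning parameter
    ("fiddle factor") that must be hand-picked per problem."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for t in range(n_steps):
        lr = lr0 / (1.0 + decay * t)               # hand-chosen decay schedule
        w -= lr * grad_fn(w, rng, batch_size)      # noisy mini-batch gradient step
    return w

# Toy problem: noisy linear least squares.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

def grad_fn(w, rng, batch_size):
    idx = rng.integers(0, len(X), size=batch_size)
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / batch_size

w_hat = sgd(grad_fn, np.zeros(5))
print(np.linalg.norm(w_hat - w_true))  # small only if lr0, decay, batch_size suit the problem
```

If lr0 is set too large the iterates diverge; too small and progress stalls. Monitoring and re-running with adjusted values is precisely the human "babysitting" the paragraph refers to.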
Recent work has increasingly tried to eliminate such fiddle factors, typically through statistical estimation. This includes the automatic selection of external parameters like the batch size or batch structure, which have not traditionally been treated as part of the optimization task. Several different strategies have now been proposed, but they are not always compatible with each other, and they lack a common framework that would foster both conceptual and algorithmic interoperability. This workshop aims to provide a forum for the nascent community studying the automation of parameter tuning in optimization routines.
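As one concrete illustration of setting such a parameter by statistical estimation, the sketch below grows the batch size whenever the estimated variance of the mini-batch gradient exceeds its squared norm, i.e., when noise dominates signal. This is a generic variance-test heuristic, not a method taken from the workshop page; the function name, the threshold theta, and the cap max_batch are all hypothetical.

```python
import numpy as np

def adapt_batch_size(per_example_grads, batch_size, theta=1.0, max_batch=4096):
    """Grow the mini-batch when sampling noise in the gradient estimate
    dominates its signal. per_example_grads: (batch_size, dim) array of
    individual per-example gradients. Illustrative heuristic only."""
    g_mean = per_example_grads.mean(axis=0)                  # gradient estimate
    # Estimated variance of the mean gradient (trace of its covariance):
    est_var = per_example_grads.var(axis=0, ddof=1).sum() / batch_size
    if est_var > theta * float(g_mean @ g_mean):             # noise > signal
        batch_size = min(2 * batch_size, max_batch)          # take more samples
    return batch_size
```

In a training loop one would call this every few steps with the current per-example gradients, so the batch size becomes a quantity the algorithm infers at run-time rather than a constant fixed by the user.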
Among the questions to be addressed by the workshop are:
* Is the prominence of tuning parameters a fundamental feature of stochastic optimization problems? Why do classic optimization methods manage to do well with virtually no free parameters?
* In which precise sense can the "optimization of optimization algorithms" be phrased as an inference / learning problem?
* Should, and can, parameters be inferred at design-time (by a human), at compile-time (by an external compiler with access to a meta-description of the problem), or at run-time (by the algorithm itself)?
* What are generic ways to learn parameters of algorithms, and inherent difficulties for doing so? Is the goal to specialize to a particular problem, or to generalize over many problems?
In addition to the invited speakers already confirmed, we also invite contributed work from the community. Topics of interest include, but are not strictly limited to:
* Parameter adaptation for optimization algorithms
* Stochastic optimization methods
* Optimization methods adapted for specific applications
* Batch selection methods
* Convergence diagnostics for optimization algorithms
Schedule
Sat 12:00 a.m. - 12:10 a.m. | Opening remarks and introduction (opening)
Sat 12:10 a.m. - 12:30 a.m. | Matt Hoffman (DeepMind) (invited talk)
Sat 12:30 a.m. - 1:00 a.m. | David Duvenaud (U of Toronto) (invited talk)
Sat 1:00 a.m. - 1:30 a.m. | Stephen J Wright (U of Wisconsin) (invited talk)
Sat 1:30 a.m. - 2:00 a.m. | *coffee break*
Sat 2:00 a.m. - 2:30 a.m. | Samantha Hansen (Spotify) (invited talk)
Sat 2:30 a.m. - 3:00 a.m. | Spotlights (contributed talks)
Sat 3:00 a.m. - 3:45 a.m. | Poster session
Sat 3:45 a.m. - 5:15 a.m. | *lunch break*
Sat 5:15 a.m. - 5:40 a.m. | Matteo Pirotta (Politecnico di Milano) (contributed talk)
Sat 5:40 a.m. - 6:00 a.m. | Ameet Talwalkar (UCLA) (invited talk)
Sat 6:00 a.m. - 6:30 a.m. | *coffee break*
Sat 6:30 a.m. - 6:50 a.m. | Ali Rahimi (Google) (invited talk)
Sat 6:50 a.m. - 7:20 a.m. | Mark Schmidt (UBC) (invited talk)
Sat 7:20 a.m. - 8:00 a.m. | *panel discussion*
Author Information
Maren Mahsereci (MPI for Intelligent Systems Tübingen)
Alex Davies (DeepMind)
Philipp Hennig (University of Tübingen and MPI Tübingen)