In nearly all machine learning tasks, decisions must be made given current knowledge (e.g., choosing which label to predict). Perhaps surprisingly, always making the best decision is not always the best strategy, particularly while learning. Recently, an emerging body of work has studied learning rules that apply perturbations to the decision procedure. These works provide simple and efficient learning rules with improved theoretical guarantees. This workshop will bring together the growing community of researchers interested in different aspects of this area, and it will broaden our understanding of why and how perturbation methods can be useful.
Last year, at the highly successful NIPS workshop on Perturbations, Optimization, and Statistics, we looked at how injecting perturbations (whether random or adversarial "noise") into learning and inference procedures can be beneficial. The focus was on two angles: first, how stochastic perturbations can be used to construct new types of probability models for structured data; and second, how deterministic perturbations affect the regularization and generalization properties of learning algorithms.
The goal of this workshop is to expand the scope of last year's workshop and to explore further ways of applying perturbations within optimization and statistics to improve machine learning approaches. This year, we would like to: (a) look at exciting new developments related to the above core themes; (b) emphasize their implications for topics that received less coverage last year, specifically highlighting connections to decision theory, risk analysis, game theory, and economics.
More specifically, we will be interested in understanding the following issues:
* Repeated games and online learning: How can random perturbations be used to explore unseen options in repeated games? How can connections to Bayesian risk be exploited? (A minimal follow-the-perturbed-leader sketch appears after this list.)
* Adversarial uncertainty: How can complex games with adversarial uncertainty be played? What are the computational properties of such solutions, and do Nash equilibria exist in these cases?
* Stochastic risk: How can predictions be averaged using random perturbations to obtain improved generalization guarantees? How do stochastic perturbations relate to approximate Bayesian risk and regularization?
* Dropout: How does stochastic dropout regularize the learning of complex models, and what is its generalization power? What is the relationship between stochastic and adversarial dropout? (See the dropout sketch after this list.)
* Robust optimization: In what ways can learning be improved by perturbing the input measurements?
* Choice theory: What is the best way to use perturbations to compensate for lack of knowledge? What lessons in modeling can machine learning take from random utility theory?
* Theory: How does the maximum of a random process relate to its complexity? How can the maximum of random perturbations be used to measure the uncertainty of a system? (A Gumbel-max sketch follows this list.)
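To make the repeated-games item concrete, here is a minimal follow-the-perturbed-leader (FTPL) sketch in Python. The toy game, the Gumbel choice of noise, and the names `ftpl_action` and `eta` are illustrative assumptions, not part of the workshop description; with Gumbel noise the perturbed argmax plays each action with exponential-weights probabilities, one concrete link between perturbations and Bayesian-style averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_action(cumulative_loss, eta=1.0):
    """Follow the Perturbed Leader: perturb the cumulative losses with fresh
    Gumbel noise and play the action that looks best. By the Gumbel-max trick,
    the played action is distributed as softmax(-eta * cumulative_loss), so
    FTPL with Gumbel noise coincides with exponential weights (Hedge)."""
    noise = rng.gumbel(size=cumulative_loss.shape)
    return int(np.argmax(-eta * cumulative_loss + noise))

# Illustrative repeated game: 3 actions, 100 rounds of random losses.
losses = rng.random((100, 3))
cumulative = np.zeros(3)
for round_losses in losses:
    action = ftpl_action(cumulative)  # decide before seeing this round's losses
    cumulative += round_losses
```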
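For the dropout item, a minimal sketch of inverted dropout as a stochastic perturbation of activations; the function name, the keep-probability convention, and the rescaling by 1/(1-p) follow common practice rather than any specific workshop result.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit independently with
    probability p and rescale survivors by 1/(1-p) so the expected activation
    is unchanged; at test time, apply no perturbation at all."""
    if not train:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

# Example: perturb a batch of activations during training.
x = np.ones((2, 4))
print(dropout(x, p=0.5, rng=np.random.default_rng(0)))
```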
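For the theory item, a sketch of how the maximum of random perturbations measures a system's uncertainty: by the Gumbel-max trick, the argmax of perturbed log-potentials is an exact Gibbs sample, and the expected maximum of centered Gumbel perturbations equals the log partition function. The function names and the toy distribution below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_potentials):
    """Adding i.i.d. Gumbel noise to log-potentials and taking the argmax
    yields an exact sample from the Gibbs distribution."""
    noise = rng.gumbel(size=log_potentials.shape)
    return int(np.argmax(log_potentials + noise))

def log_partition_estimate(log_potentials, n_samples=10_000):
    """E[max_i(theta_i + G_i)] = log sum_i exp(theta_i) for zero-mean Gumbel
    noise; numpy's standard Gumbel has mean euler_gamma, so we center it."""
    noise = rng.gumbel(size=(n_samples, log_potentials.size)) - np.euler_gamma
    return float(np.mean(np.max(log_potentials + noise, axis=1)))

theta = np.log(np.array([0.2, 0.3, 0.5]))  # log Z = log(1.0) = 0
print(log_partition_estimate(theta))       # approximately 0
```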
Author Information
Tamir Hazan (Technion)
George Papandreou (Toyota Technological Institute at Chicago)
Sasha Rakhlin (University of Pennsylvania)
Danny Tarlow (Google Brain)
More from the Same Authors
- 2021 Spotlight: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair »
  Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra
- 2021 Spotlight: Learning Generalized Gumbel-max Causal Mechanisms »
  Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan
- 2022 Poster: On the Importance of Gradient Norm in PAC-Bayesian Bounds »
  Itai Gat · Yossi Adi · Alex Schwing · Tamir Hazan
- 2021 Workshop: Advances in Programming Languages and Neurosymbolic Systems (AIPLANS) »
  Breandan Considine · Disha Shrivastava · David Yu-Tung Hui · Chin-Wei Huang · Shawn Tan · Xujie Si · Prakash Panangaden · Guy Van den Broeck · Daniel Tarlow
- 2021 Poster: Structured Denoising Diffusion Models in Discrete State-Spaces »
  Jacob Austin · Daniel D. Johnson · Jonathan Ho · Daniel Tarlow · Rianne van den Berg
- 2021 Poster: Learning to Combine Per-Example Solutions for Neural Program Synthesis »
  Disha Shrivastava · Hugo Larochelle · Daniel Tarlow
- 2021 Poster: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair »
  Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra
- 2021 Poster: Learning Generalized Gumbel-max Causal Mechanisms »
  Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan
- 2020 Poster: Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies »
  Itai Gat · Idan Schwartz · Alex Schwing · Tamir Hazan
- 2020 Poster: Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces »
  Guy Lorberbom · Chris Maddison · Nicolas Heess · Tamir Hazan · Danny Tarlow
- 2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder »
  Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan
- 2017 Poster: High-Order Attention Models for Visual Question Answering »
  Idan Schwartz · Alex Schwing · Tamir Hazan
- 2016 Workshop: Time Series Workshop »
  Oren Anava · Marco Cuturi · Azadeh Khaleghi · Vitaly Kuznetsov · Sasha Rakhlin
- 2016 Poster: Constraints Based Convex Belief Propagation »
  Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan
- 2015 Workshop: Time Series Workshop »
  Oren Anava · Azadeh Khaleghi · Vitaly Kuznetsov · Alexander Rakhlin
- 2015 Poster: Adaptive Online Learning »
  Dylan Foster · Alexander Rakhlin · Karthik Sridharan
- 2015 Spotlight: Adaptive Online Learning »
  Dylan Foster · Alexander Rakhlin · Karthik Sridharan
- 2014 Workshop: Modern Nonparametrics 3: Automating the Learning Pipeline »
  Eric Xing · Mladen Kolar · Arthur Gretton · Samory Kpotufe · Han Liu · Zoltán Szabó · Alan Yuille · Andrew G Wilson · Ryan Tibshirani · Sasha Rakhlin · Damian Kozbur · Bharath Sriperumbudur · David Lopez-Paz · Kirthevasan Kandasamy · Francesco Orabona · Andreas Damianou · Wacha Bounliphone · Yanshuai Cao · Arijit Das · Yingzhen Yang · Giulia DeSalvo · Dmitry Storcheus · Roberto Valerio
- 2014 Workshop: Perturbations, Optimization, and Statistics »
  Tamir Hazan · George Papandreou · Danny Tarlow
- 2014 Poster: Just-In-Time Learning for Fast and Flexible Inference »
  S. M. Ali Eslami · Danny Tarlow · Pushmeet Kohli · John Winn
- 2014 Poster: A* Sampling »
  Chris Maddison · Danny Tarlow · Tom Minka
- 2014 Oral: A* Sampling »
  Chris Maddison · Danny Tarlow · Tom Minka
- 2013 Workshop: Learning Faster From Easy Data »
  Peter Grünwald · Wouter M Koolen · Sasha Rakhlin · Nati Srebro · Alekh Agarwal · Karthik Sridharan · Tim van Erven · Sebastien Bubeck
- 2013 Poster: Optimization, Learning, and Games with Predictable Sequences »
  Sasha Rakhlin · Karthik Sridharan
- 2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
  Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola
- 2013 Poster: Learning to Pass Expectation Propagation Messages »
  Nicolas Heess · Danny Tarlow · John Winn
- 2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations »
  Tamir Hazan · Subhransu Maji · Tommi Jaakkola
- 2013 Poster: Online Learning of Dynamic Parameters in Social Networks »
  Shahin Shahrampour · Sasha Rakhlin · Ali Jadbabaie
- 2012 Workshop: Perturbations, Optimization, and Statistics »
  Tamir Hazan · George Papandreou · Danny Tarlow
- 2012 Poster: Bayesian n-Choose-k Models for Classification and Ranking »
  Kevin Swersky · Danny Tarlow · Richard Zemel · Ryan Adams · Brendan J Frey
- 2012 Poster: Relax and Randomize: From Value to Algorithms »
  Sasha Rakhlin · Ohad Shamir · Karthik Sridharan
- 2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins »
  Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun
- 2012 Oral: Relax and Randomize: From Value to Algorithms »
  Sasha Rakhlin · Ohad Shamir · Karthik Sridharan
- 2012 Poster: Cardinality Restricted Boltzmann Machines »
  Kevin Swersky · Danny Tarlow · Ilya Sutskever · Richard Zemel · Russ Salakhutdinov · Ryan Adams
- 2011 Workshop: Computational Trade-offs in Statistical Learning »
  Alekh Agarwal · Sasha Rakhlin
- 2011 Session: Oral Session 12 »
  Sasha Rakhlin
- 2011 Poster: Lower Bounds for Passive and Active Learning »
  Maxim Raginsky · Sasha Rakhlin
- 2011 Poster: Stochastic convex optimization with bandit feedback »
  Alekh Agarwal · Dean P Foster · Daniel Hsu · Sham M Kakade · Sasha Rakhlin
- 2011 Spotlight: Lower Bounds for Passive and Active Learning »
  Maxim Raginsky · Sasha Rakhlin
- 2011 Poster: Online Learning: Stochastic, Constrained, and Smoothed Adversaries »
  Sasha Rakhlin · Karthik Sridharan · Ambuj Tewari
- 2010 Poster: Random Walk Approach to Regret Minimization »
  Hariharan Narayanan · Sasha Rakhlin
- 2010 Oral: Online Learning: Random Averages, Combinatorial Parameters, and Learnability »
  Sasha Rakhlin · Karthik Sridharan · Ambuj Tewari
- 2010 Poster: Gaussian sampling by local perturbations »
  George Papandreou · Alan Yuille
- 2010 Poster: A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction »
  Tamir Hazan · Raquel Urtasun
- 2010 Poster: Online Learning: Random Averages, Combinatorial Parameters, and Learnability »
  Sasha Rakhlin · Karthik Sridharan · Ambuj Tewari
- 2010 Poster: Direct Loss Minimization for Structured Prediction »
  David A McAllester · Tamir Hazan · Joseph Keshet
- 2007 Oral: Adaptive Online Gradient Descent »
  Peter Bartlett · Elad Hazan · Sasha Rakhlin
- 2007 Poster: Adaptive Online Gradient Descent »
  Peter Bartlett · Elad Hazan · Sasha Rakhlin
- 2006 Poster: Using Combinatorial Optimization within Max-Product Belief Propagation »
  John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller
- 2006 Spotlight: Using Combinatorial Optimization within Max-Product Belief Propagation »
  John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller
- 2006 Poster: Stability of $K$-Means Clustering »
  Sasha Rakhlin · Andrea Caponnetto