In nearly all machine learning tasks, we expect there to be randomness, or noise, in the data we observe and in the relationships encoded by the model. Usually, this noise is considered undesirable, and we would eliminate it if possible. However, there is an emerging body of work on perturbation methods, showing the benefits of explicitly adding noise into the modeling, learning, and inference pipelines. This workshop will bring together the growing community of researchers interested in different aspects of this area, and will broaden our understanding of why and how perturbation methods can be useful.
More generally, perturbation methods usually provide efficient and principled ways to reason about the neighborhood of possible outcomes when trying to make the best decision. For example, some might want to arrive at the best outcome that is robust to small changes in model parameters. Others might want to find the best choice while compensating for their lack of knowledge by averaging over the different outcomes. Recently, several works influenced by diverse fields of research such as statistics, optimization, machine learning, and theoretical computer science have used perturbation methods in similar ways. The goal of this workshop is to explore different techniques in perturbation methods and their consequences for computation, statistics, and optimization. We shall specifically be interested in understanding the following issues:
* Statistical Modeling: What types of statistical models can be defined for structured prediction? How can random perturbations be used to relate computation and statistics?
* Efficient Sampling: What are the computational properties that allow efficient and unbiased sampling? How do perturbations control the geometry of such models and how can we construct sampling methods for these families?
* Approximate Inference: What are the computational and statistical requirements of inference? How can the maximum of random perturbations be used to measure the uncertainty of a system?
* Learning: How can we probabilistically learn model parameters from training data using random perturbations? What are the connections with max-margin and conditional random fields techniques?
* Theory: How does the maximum of a random process relate to its complexity? What are the statistical and computational properties it describes in Gaussian free fields over graphs?
* Pseudo-sampling: How do dynamical systems encode randomness? To what extent do perturbations direct us to the “pseudorandomness” of their underlying dynamics?
* Robust classification: How can classifiers be learned in a robust way, and how can support vector machines be realized in this context? What are the relations between adversarial perturbations and regularization, and what are their extensions to structured prediction?
* Robust reconstructions: How can information be robustly encoded? In what ways can learning be improved by perturbing the input measurements?
* Adversarial Uncertainty: How can structured prediction be performed in a zero-sum game setting? What are the computational properties of such solutions, and do Nash equilibria exist in these cases?
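As a concrete illustration of the "maximum of random perturbations" idea that recurs in the topics above, the following is a minimal sketch (not part of the original call) of the Gumbel-max trick: perturbing each logit of a discrete model with independent Gumbel noise and taking the argmax yields an exact sample from the corresponding softmax distribution.

```python
import math
import random

def gumbel_max_sample(logits):
    """Draw an index i with probability proportional to exp(logits[i]).

    Each logit is perturbed with independent Gumbel(0, 1) noise,
    generated as -log(-log(U)) for U ~ Uniform(0, 1); the argmax of
    the perturbed logits is then an exact sample from the softmax
    distribution over the logits.
    """
    perturbed = [l - math.log(-math.log(random.random())) for l in logits]
    return max(range(len(logits)), key=perturbed.__getitem__)
```

This reduces sampling to an optimization (argmax) over perturbed scores, which is the basic mechanism that perturb-and-MAP methods generalize from single variables to structured models.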
Target Audience: The workshop should appeal to NIPS attendees interested both in theoretical aspects such as Bayesian modeling, Monte Carlo sampling, optimization, inference, and learning, and in practical applications in computer vision and language modeling.
Author Information
Tamir Hazan (Technion)
George Papandreou (Toyota Technological Institute at Chicago)
Danny Tarlow (Google Brain)
More from the Same Authors

2021 Spotlight: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair »
Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra 
2021 Spotlight: Learning Generalized Gumbel-max Causal Mechanisms »
Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan 
2021 Workshop: Advances in Programming Languages and Neurosymbolic Systems (AIPLANS) »
Breandan Considine · Disha Shrivastava · David Yu-Tung Hui · Chin-Wei Huang · Shawn Tan · Xujie Si · Prakash Panangaden · Guy Van den Broeck · Daniel Tarlow 
2021 Poster: Structured Denoising Diffusion Models in Discrete State-Spaces »
Jacob Austin · Daniel D. Johnson · Jonathan Ho · Daniel Tarlow · Rianne van den Berg 
2021 Poster: Learning to Combine Per-Example Solutions for Neural Program Synthesis »
Disha Shrivastava · Hugo Larochelle · Daniel Tarlow 
2021 Poster: PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair »
Zimin Chen · Vincent J Hellendoorn · Pascal Lamblin · Petros Maniatis · Pierre-Antoine Manzagol · Daniel Tarlow · Subhodeep Moitra 
2021 Poster: Learning Generalized Gumbel-max Causal Mechanisms »
Guy Lorberbom · Daniel D. Johnson · Chris Maddison · Daniel Tarlow · Tamir Hazan 
2020 Poster: Removing Bias in Multimodal Classifiers: Regularization by Maximizing Functional Entropies »
Itai Gat · Idan Schwartz · Alex Schwing · Tamir Hazan 
2020 Poster: Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces »
Guy Lorberbom · Chris Maddison · Nicolas Heess · Tamir Hazan · Danny Tarlow 
2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder »
Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan 
2017 Poster: High-Order Attention Models for Visual Question Answering »
Idan Schwartz · Alex Schwing · Tamir Hazan 
2016 Poster: Constraints Based Convex Belief Propagation »
Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan 
2014 Workshop: Perturbations, Optimization, and Statistics »
Tamir Hazan · George Papandreou · Danny Tarlow 
2014 Poster: Just-In-Time Learning for Fast and Flexible Inference »
S. M. Ali Eslami · Danny Tarlow · Pushmeet Kohli · John Winn 
2014 Poster: A* Sampling »
Chris Maddison · Danny Tarlow · Tom Minka 
2014 Oral: A* Sampling »
Chris Maddison · Danny Tarlow · Tom Minka 
2013 Workshop: Perturbations, Optimization, and Statistics »
Tamir Hazan · George Papandreou · Sasha Rakhlin · Danny Tarlow 
2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola 
2013 Poster: Learning to Pass Expectation Propagation Messages »
Nicolas Heess · Danny Tarlow · John Winn 
2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations »
Tamir Hazan · Subhransu Maji · Tommi Jaakkola 
2012 Poster: Bayesian n-Choose-k Models for Classification and Ranking »
Kevin Swersky · Danny Tarlow · Richard Zemel · Ryan Adams · Brendan J Frey 
2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins »
Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun 
2012 Poster: Cardinality Restricted Boltzmann Machines »
Kevin Swersky · Danny Tarlow · Ilya Sutskever · Richard Zemel · Russ Salakhutdinov · Ryan Adams 
2010 Poster: Gaussian sampling by local perturbations »
George Papandreou · Alan Yuille 
2010 Poster: A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction »
Tamir Hazan · Raquel Urtasun 
2010 Poster: Direct Loss Minimization for Structured Prediction »
David A McAllester · Tamir Hazan · Joseph Keshet 
2006 Poster: Using Combinatorial Optimization within Max-Product Belief Propagation »
John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller 
2006 Spotlight: Using Combinatorial Optimization within Max-Product Belief Propagation »
John Duchi · Danny Tarlow · Gal Elidan · Daphne Koller