NIPS 2012


Workshop

Perturbations, Optimization, and Statistics

Tamir Hazan · George Papandreou · Danny Tarlow

Glenbrook + Emerald Bay, Harrah’s Special Events Center 2nd Floor

In nearly all machine learning tasks, we expect there to be randomness, or noise, in the data we observe and in the relationships encoded by the model. Usually, this noise is considered undesirable, and we would eliminate it if possible. However, there is an emerging body of work on perturbation methods, showing the benefits of explicitly adding noise into the modeling, learning, and inference pipelines. This workshop will bring together the growing community of researchers interested in different aspects of this area, and will broaden our understanding of why and how perturbation methods can be useful.

More generally, perturbation methods provide efficient and principled ways to reason about the neighborhood of possible outcomes when trying to make the best decision. For example, some might want to arrive at the best outcome that is robust to small changes in model parameters. Others might want to find the best choice while compensating for their lack of knowledge by averaging over the different outcomes. Recently, several lines of work influenced by diverse fields such as statistics, optimization, machine learning, and theoretical computer science have used perturbation methods in similar ways. The goal of this workshop is to explore different techniques in perturbation methods and their consequences for computation, statistics, and optimization. We are specifically interested in understanding the following issues:

* Statistical Modeling: What types of statistical models can be defined for structured prediction? How can random perturbations be used to relate computation and statistics?
* Efficient Sampling: What computational properties allow efficient and unbiased sampling? How do perturbations control the geometry of such models, and how can we construct sampling methods for these families?
* Approximate Inference: What are the computational and statistical requirements of inference? How can the maximum of random perturbations be used to measure the uncertainty of a system? (See the sketch after this list.)
* Learning: How can we probabilistically learn model parameters from training data using random perturbations? What are the connections with max-margin and conditional random field techniques?
* Theory: How does the maximum of a random process relate to its complexity? What statistical and computational properties does it describe in Gaussian free fields over graphs?
* Pseudo-sampling: How do dynamical systems encode randomness? To what extent do perturbations direct us to the "pseudo-randomness" of their underlying dynamics?
* Robust classification: How can classifiers be learned in a robust way, and how can support vector machines be realized in this context? What are the relations between adversarial perturbations and regularization, and what are their extensions to structured prediction?
* Robust reconstructions: How can information be robustly encoded? In what ways can learning be improved by perturbing the input measurements?
* Adversarial Uncertainty: How can structured prediction be performed in a zero-sum game setting? What are the computational properties of such solutions, and do Nash equilibria exist in these cases?
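
As a concrete illustration of the "maximum of random perturbations" idea raised in the Efficient Sampling and Approximate Inference questions, the following sketch (not part of the workshop materials; variable names and values are illustrative) shows the Gumbel-max construction for a small discrete model: adding i.i.d. Gumbel noise to unnormalized log-potentials and taking the argmax yields exact samples from the corresponding Gibbs distribution, and the expected maximum of the perturbed potentials equals the log-partition function.

```python
# Minimal sketch of the Gumbel-max / perturb-and-MAP idea on a 3-state model.
# Assumes only NumPy; phi and the sample count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

phi = np.array([1.0, 2.0, 0.5])            # unnormalized log-potentials of 3 states
p = np.exp(phi) / np.exp(phi).sum()        # target Gibbs distribution

n = 200_000
gumbel = rng.gumbel(size=(n, phi.size))    # i.i.d. Gumbel(0, 1) perturbations
samples = np.argmax(phi + gumbel, axis=1)  # MAP of the randomly perturbed potentials

empirical = np.bincount(samples, minlength=phi.size) / n
print("target      :", np.round(p, 3))
print("perturb-MAP :", np.round(empirical, 3))

# The expected maximum of the perturbed potentials estimates log Z.
logZ_est = np.mean(np.max(phi + gumbel, axis=1))
print("log Z exact :", np.log(np.exp(phi).sum()), " estimate:", logZ_est)
```

For structured models with exponentially many states, the workshop questions above concern what happens when such perturbations are applied only to low-dimensional potentials and the argmax is computed by a combinatorial MAP solver.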


Target Audience: The workshop should appeal to NIPS attendees interested both in theoretical aspects such as Bayesian modeling, Monte Carlo sampling, optimization, inference, and learning, and in practical applications in computer vision and language modeling.
