Workshop

OPT 2022: Optimization for Machine Learning

Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi

Room 295 - 296

OPT 2022 will bring together experts in optimization and in machine learning to share their perspectives and recent advances. The workshop honors the tradition of connecting the optimization and ML communities in order to promote and generate new interactions between them.

To foster the spirit of innovation and collaboration that is a goal of this workshop, OPT 2022 will focus the contributed talks on research in Reliable Optimization Methods for ML. Many optimization algorithms for ML were originally developed to handle computational constraints (e.g., stochastic gradient-based algorithms), and their analyses followed the classical optimization approach, measuring an algorithm's performance by (i) its computational cost and (ii) its convergence on any input.

As engineering capabilities increase and ML is widely adopted in real-world applications, practitioners are seeking optimization algorithms that go beyond finding a minimizer as fast as possible: they want reliable methods that handle the complications arising in practice. For example, bad actors are increasingly attempting to fool models with deceptive data. This raises questions such as: which algorithms are more robust to adversarial attacks, and can one design new algorithms that thwart such attacks? The latter question motivates a new area of optimization focused on game-theoretic environments, that is, environments where competing forces are at play and guarantees must be devised accordingly.

Beyond this, a main reason for the success of ML is that optimization algorithms seemingly generate points that learn from the training data: we want minimizers of the training objective to remain meaningful on new data (generalization), yet we do not understand which features (e.g., the loss function, the algorithm, the depth of the architecture in deep learning, and/or the training samples) yield better generalization properties. These new directions in solving practical ML problems, and their deep ties to the optimization community, warrant a discussion between the two communities. Specifically, we aim to discuss the meanings of generalization, the challenges facing real-world applications of ML, and the new paradigms for optimizers seeking to solve them.
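As one illustration of the game-theoretic viewpoint above, the sketch below (our own, not workshop material) trains a logistic-regression model against the worst-case l-infinity perturbation of each input. For a linear model the inner maximization of this min-max problem has a closed form, which makes the example self-contained; all names and parameter values (eps, lr, n_steps, the synthetic data) are illustrative assumptions.

```python
# Minimal sketch of min-max (adversarially robust) training for logistic
# regression with labels in {-1, +1}. The robust objective is
#   min_w (1/n) sum_i max_{||delta||_inf <= eps} log(1 + exp(-y_i w.(x_i + delta)))
# and for a linear model the inner max is attained at delta = -eps * y_i * sign(w),
# giving the robust loss log(1 + exp(-(y_i w.x_i - eps * ||w||_1))).
# All hyperparameters below are illustrative, not taken from the workshop.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr, n_steps = 200, 5, 0.1, 0.1, 500

# Synthetic linearly separable data with labels in {-1, +1}.
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)

w = np.zeros(d)
for _ in range(n_steps):
    # Worst-case (robust) margin for each example under the eps budget.
    margins = y * (X @ w) - eps * np.abs(w).sum()
    # sigmoid(-margin), written via tanh for numerical stability.
    p = 0.5 * (1.0 - np.tanh(margins / 2.0))
    # Gradient of the average robust logistic loss in w.
    grad = -(p * y) @ X / n + eps * p.mean() * np.sign(w)
    w -= lr * grad

# Fraction of training points classified correctly even under the
# worst-case eps-bounded perturbation.
robust_acc = np.mean(y * (X @ w) - eps * np.abs(w).sum() > 0)
print(f"robust training accuracy at eps={eps}: {robust_acc:.2f}")
```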

Plenary Speakers: All invited speakers have agreed to attend the workshop in person.

* Niao He (ETH Zurich, assistant professor)

* Zico Kolter (Carnegie Mellon University, associate professor)

* Lorenzo Rosasco (U Genova/MIT, assistant professor)

* Katya Scheinberg (Cornell, full professor)

* Aaron Sidford (Stanford, assistant professor)

Timezone: America/Los_Angeles

Schedule