Workshop
Smooth Games Optimization and Machine Learning
Simon Lacoste-Julien · Ioannis Mitliagkas · Gauthier Gidel · Vasilis Syrgkanis · Eva Tardos · Leon Bottou · Sebastian Nowozin

Fri Dec 07 05:00 AM -- 03:30 PM (PST) @ Room 512 ABEF
Event URL: https://sgo-workshop.github.io/

Overview

Advances in generative modeling and adversarial learning gave rise to a recent surge of interest in smooth two-player games, specifically in the context of learning generative adversarial networks (GANs). Solving these games raises challenges that are intrinsically different from the minimization tasks the machine learning community is used to. The goal of this workshop is to bring together the several communities interested in such smooth games, in order to present what is known on the topic and to identify current open questions, such as how to handle the non-convexity appearing in GANs.

Background and objectives

A number of problems and applications in machine learning are formulated as games. A special class of games, smooth games, has come into the spotlight recently with the advent of GANs. In a two-player smooth game, each player attempts to minimize their differentiable cost function, which also depends on the action of the other player. The dynamics of such games are distinct from the better-understood dynamics of optimization problems. For example, the Jacobian of gradient descent on a smooth two-player game can be non-symmetric and have complex eigenvalues. Recent work by ML researchers has identified these dynamics as a key challenge for efficiently solving similar problems.
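As a concrete illustration of this point (a toy sketch of ours, not taken from the workshop material), consider the bilinear game min_x max_y xy: the joint gradient field has a non-symmetric, antisymmetric Jacobian with purely imaginary eigenvalues, and simultaneous gradient steps spiral away from the equilibrium.

    import numpy as np

    # Toy illustration (our own sketch): bilinear game min_x max_y f(x, y) = x * y.
    # The joint field v(x, y) = (df/dx, -df/dy) = (y, -x) has a non-symmetric
    # Jacobian with purely imaginary eigenvalues, so simultaneous gradient
    # descent-ascent spirals away from the equilibrium (0, 0).

    def vector_field(z):
        x, y = z
        return np.array([y, -x])          # descent direction for x, ascent for y

    jacobian = np.array([[0.0, 1.0],
                         [-1.0, 0.0]])    # dv/dz: non-symmetric, eigenvalues +/- i
    print("eigenvalues:", np.linalg.eigvals(jacobian))

    z = np.array([1.0, 1.0])
    for _ in range(100):
        z = z - 0.1 * vector_field(z)     # simultaneous gradient step
    print("distance from equilibrium after 100 steps:", np.linalg.norm(z))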

A major hurdle for relevant research in the ML community is the lack of interaction with the mathematical programming and game theory communities, where similar problems have been tackled in the past, yielding useful tools. While ML researchers are quite familiar with the convex optimization toolbox from mathematical programming, they are less familiar with the tools for solving games. For example, the extragradient algorithm for solving variational inequalities has been known in the mathematical programming literature for decades; the ML community, however, has until recently mainly appealed to gradient descent to optimize adversarial objectives.
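To make the contrast concrete, here is a minimal sketch (ours, with an arbitrary toy problem and step size, not code from any referenced paper) of the extragradient method on the same bilinear game; the extrapolation step is what restores convergence where plain gradient descent-ascent cycles or diverges.

    import numpy as np

    # Extragradient (Korpelevich) on min_x max_y x * y.  Each iteration takes an
    # extrapolation ("lookahead") step and then updates from the gradient
    # evaluated at the extrapolated point.

    def v(z):                              # the game's vector field
        x, y = z
        return np.array([y, -x])

    eta = 0.1
    z = np.array([1.0, 1.0])
    for _ in range(2000):
        z_half = z - eta * v(z)            # extrapolation step
        z = z - eta * v(z_half)            # update with the extrapolated gradient
    print("extragradient iterate:", z)     # moves toward the equilibrium (0, 0)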

The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the status of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We also encourage, and will devote time during the workshop to, work that identifies and discusses open, forward-looking problems of interest to the NIPS community.

Examples of topics of interest to the workshop are as follows:

  • Other examples of smooth games in machine learning (e.g. actor-critic models in RL).
  • Standard or novel algorithms to solve smooth games.
  • Empirical test of algorithms on GAN applications.
  • Existence and uniqueness results for equilibria in smooth games.
  • Can approximate equilibria have better properties than exact ones? [Arora 2017, Lipton and Young 1994].
  • Variational inequality algorithms [Harker and Pang 1990, Gidel et al. 2018].
  • Handling stochasticity [Hazan et al. 2017] or non-convexity [Grnarova et al. 2018] in smooth games.
  • Related topics from mathematical programming (e.g. bilevel optimization) [Pfau and Vinyals 2016].
Fri 5:30 a.m. - 5:50 a.m.
Opening remarks (Talk)
Simon Lacoste-Julien, Gauthier Gidel
Fri 5:50 a.m. - 6:30 a.m.

Generative Adversarial Networks (aka GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are trained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both Optimization and Statistics. Their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these important challenges. We use Game Theory towards improving GAN training, and Statistics towards scaling up the dimensionality of the generated distributions.
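For reference, the zero-sum game referred to here is the standard GAN objective (Goodfellow et al., 2014), in which a generator G and a discriminator D are trained against each other:

    \min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]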

Constantinos Daskalakis
Fri 6:30 a.m. - 7:00 a.m.

1) Provable Non-Convex Min-Max Optimization; Mingrui Liu, Rafique Hassan, Qihang Lin and Tianbao Yang.

2) Solving differential games by methods for finite-dimensional saddle-point problems; Pavel Dvurechensky, Yurii Nesterov and Vladimir Spokoiny.

3) Generalized Mirror Prox Algorithm for Variational Inequalities; Pavel Dvurechensky, Alexander Gasnikov, Fedor Stonyakin and Alexander Titov.

4) On the convergence of stochastic forward-backward-forward algorithms with variance reduction in pseudo-monotone variational inequalities; Mathias Staudigl, Radu Ioan Bot, Phan Tu Vuong and Panayotis Mertikopoulos.

5) A Variational Inequality Perspective on Generative Adversarial Networks; Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent and Simon Lacoste-Julien.

Tianbao Yang, Pavel Dvurechenskii, Panayotis Mertikopoulos, Hugo Berard
Fri 7:00 a.m. - 7:30 a.m.
Poster session
Fri 7:30 a.m. - 8:00 a.m.
Morning coffee break (Break)
Fri 8:00 a.m. - 8:40 a.m.

This talk will discuss a wide spectrum of recent advances in machine learning using smooth games, beyond the phenomenal success of GANs. Such showcases include reinforcement learning, robust and adversarial machine learning, approximate Bayesian computation, maximum likelihood estimation in exponential families, and more. We show that all of these classical machine learning tasks can be reduced to solving (non-)convex-concave min-max optimization problems. Hence, it is of paramount importance to develop a good theoretical understanding and principled algorithms for min-max optimization. We will review some of the theory and algorithms for smooth games and variational inequalities in the convex regime and shed some light on their counterparts in the non-convex regime.
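As one concrete instance of such a reduction (our illustration, not necessarily the one used in the talk), adversarial robustness is commonly written as a min-max problem in which the learner minimizes the worst-case loss over norm-bounded perturbations:

    \min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} \ell\big(f_\theta(x + \delta),\, y\big) \Big]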

Niao He
Fri 8:40 a.m. - 9:00 a.m.

We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash Equilibria (NE) perspective. Inspired by the classical prox methods, we develop a novel algorithmic framework for GANs via an infinite-dimensional two-player game and prove rigorous convergence rates to the mixed NE. We then propose a principled procedure to reduce our novel prox methods to simple sampling routines, leading to practically efficient algorithms. Finally, we provide experimental evidence that our approach outperforms methods that seek pure strategy equilibria, such as SGD, Adam, and RMSProp, both in speed and quality.

Volkan Cevher
Fri 9:00 a.m. - 9:20 a.m.

Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function satisfying recently introduced notions of submodularity over continuous domains. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem.
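For readers less familiar with the solution concept, a coarse correlated equilibrium is a joint distribution \sigma over strategy profiles such that no player benefits from unilaterally committing to a fixed deviation before the profile is drawn:

    \mathbb{E}_{s \sim \sigma}\big[u_i(s)\big] \;\ge\; \mathbb{E}_{s \sim \sigma}\big[u_i(s_i',\, s_{-i})\big] \qquad \text{for every player } i \text{ and every deviation } s_i'.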

Pier Giuseppe Sessa
Fri 9:20 a.m. - 11:00 a.m.
Lunch break (Break)
Fri 11:00 a.m. - 11:40 a.m.

A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, watch how the opponents compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template for producing new and interesting algorithms. I will describe this framework, show how it applies in several scenarios, and discuss recent work that draws a connection to the Frank-Wolfe algorithm and Nesterov's accelerated gradient descent.
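The template is easy to state in code. The following sketch (our own toy instance, with an arbitrary random payoff matrix, not the speaker's material) pits two multiplicative-weights (Hedge) learners against each other in a zero-sum matrix game; the time-averaged strategies form an approximate minimax equilibrium, and the duality gap extracted from them shrinks with the players' regret.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))        # payoff matrix: row player minimizes x^T A y

    eta, T = 0.1, 5000
    x = np.ones(5) / 5                     # row player's mixed strategy
    y = np.ones(5) / 5                     # column player's mixed strategy
    x_avg, y_avg = np.zeros(5), np.zeros(5)

    for _ in range(T):
        x_avg += x / T
        y_avg += y / T
        loss_x = A @ y                     # row player minimizes x^T A y
        loss_y = -A.T @ x                  # column player maximizes, i.e. minimizes -x^T A y
        x *= np.exp(-eta * loss_x); x /= x.sum()   # Hedge update, row player
        y *= np.exp(-eta * loss_y); y /= y.sum()   # Hedge update, column player

    # duality gap of the averaged strategies shrinks at a rate governed by the regret
    gap = np.max(x_avg @ A) - np.min(A @ y_avg)
    print("approximate duality gap:", gap)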

Jacob D Abernethy
Fri 11:40 a.m. - 12:00 p.m.

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Optimization in differentiable games often proceeds by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this work, we analyze gradient-based methods with momentum on simple games. We then show empirically that alternating gradient updates with a negative momentum term achieve convergence on the notoriously difficult-to-train saturating GANs.
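The flavor of such an analysis can be reproduced on a toy problem. The sketch below (our own construction; the step size, momentum values, and bilinear game are arbitrary choices, not the authors' experiments) builds the linear operator of one round of alternating gradient updates with heavy-ball momentum on min_x max_y xy and reports its spectral radius, which determines whether the iterates converge (radius below 1) or diverge.

    import numpy as np

    def alternating_momentum_operator(eta, beta):
        # state is (x_t, x_{t-1}, y_t, y_{t-1});
        #   x_{t+1} = x_t - eta*y_t + beta*(x_t - x_{t-1})
        #   y_{t+1} = y_t + eta*x_{t+1} + beta*(y_t - y_{t-1})   (uses the fresh x_{t+1})
        return np.array([
            [1 + beta,        -beta,      -eta,                0.0  ],
            [1.0,              0.0,        0.0,                0.0  ],
            [eta * (1 + beta), -eta * beta, 1 + beta - eta**2, -beta],
            [0.0,              0.0,        1.0,                0.0  ],
        ])

    # compare positive, zero, and negative momentum numerically
    for beta in (0.5, 0.0, -0.5):
        radius = max(abs(np.linalg.eigvals(alternating_momentum_operator(0.1, beta))))
        print(f"beta = {beta:+.1f}  spectral radius = {radius:.4f}")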

Reyhane Askari Hemmat
Fri 12:00 p.m. - 12:30 p.m.
Afternoon coffee break (Break)
Fri 12:30 p.m. - 12:50 p.m.

We derive a new framework for regret minimization on sequential decision problems and extensive-form games with general compact convex sets at each decision point and general convex losses, as opposed to prior work which has been for simplex decision points and linear losses. We call our framework laminar regret decomposition. It generalizes the CFR algorithm to this more general setting. Furthermore, our framework enables a new proof of CFR even in the known setting, which is derived from a perspective of decomposing polytope regret, thereby leading to an arguably simpler interpretation of the algorithm. Our generalization to convex compact sets and convex losses allows us to develop new algorithms for several problems: regularized sequential decision making, regularized Nash equilibria in extensive-form games, and computing approximate extensive-form perfect equilibria. Our generalization also leads to the first regret-minimization algorithm for computing reduced-normal-form quantal response equilibria based on minimizing local regrets.

Gabriele Farina
Fri 12:50 p.m. - 1:30 p.m.

Generative Adversarial Networks (GANs) have become one of the most powerful paradigms for learning real-world distributions. Despite this success, their minimax nature makes them fundamentally different from more classical generative models, thus raising novel challenges, most notably in terms of training and evaluation. Indeed, finding a saddle point is in general a harder task than converging to an extremum. We view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building upon ideas from online learning and game theory, we propose (i) a novel training method with provable convergence to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is an arbitrary network, and (ii) a natural metric for detecting non-convergence, namely the duality gap.
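The non-convergence metric mentioned at the end, the duality gap, is standardly defined for a two-player zero-sum objective M(u, v) (generator parameters u, discriminator parameters v) as

    \mathrm{DG}(u, v) \;=\; \max_{v'} M(u, v') \;-\; \min_{u'} M(u', v) \;\ge\; 0,

which is zero exactly at a pure equilibrium, so a persistently large estimate of it signals non-convergence.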

Paulina Grnarova
Fri 1:30 p.m. - 2:00 p.m.

1) Model Compression with Generative Adversarial Networks; Ruishan Liu, Nicolo Fusi and Lester Mackey.

2) An Adversarial Labeling Game for Learning from Weak Supervision; Chidubem Arachie and Bert Huang.

3) Multi-objective training of Generative Adversarial Networks with multiple discriminators; Isabela Albuquerque, Joao Monteiro, Thang Doan, Breandan Considine, Tiago Falk and Ioannis Mitliagkas.

4) A GAN framework for Instance Segmentation using the Mutex Watershed Algorithm; Mandikal Vikram and Steffen Wolf.

Nicolo Fusi, Chidubem Arachie, Joao Monteiro, Steffen Wolf
Fri 2:00 p.m. - 2:30 p.m.

Panel with invited speakers.

Fri 2:30 p.m. - 2:40 p.m.
Concluding remarks (Talk)
Fri 2:40 p.m. - 3:30 p.m.
Poster session afternoon (Poster session)

Author Information

Simon Lacoste-Julien (Mila, Université de Montréal)

Simon Lacoste-Julien is an associate professor at Mila and DIRO, Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads the SAIT AI Lab Montreal from Samsung on a part-time basis. His research interests are machine learning and applied math, with applications in related fields like computer vision and natural language processing. He obtained a B.Sc. in mathematics, physics, and computer science from McGill, a PhD in computer science from UC Berkeley, and did a postdoc at the University of Cambridge. He spent a few years as a research faculty member at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016 to answer the call from Yoshua Bengio to help grow the Montreal AI ecosystem.

Ioannis Mitliagkas (University of Montreal)
Gauthier Gidel (MILA)
Vasilis Syrgkanis (Microsoft Research)
Eva Tardos (Cornell)

Eva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University and was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest. Tardos's research interests are algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms, approximation algorithms, and quantifying the efficiency of selfish routing. Her current interests include the effect of learning behavior in games. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and is an external member of the Hungarian Academy of Sciences. She is the recipient of a number of fellowships and awards, including the Packard Fellowship, the Gödel Prize, the Dantzig Prize, the Fulkerson Prize, the EATCS Award, and the IEEE Technical Achievement Award. She is Editor-in-Chief of the Journal of the ACM and has previously served as editor of several other journals, including the SIAM Journal on Computing and Combinatorica; she has served as a program committee member for many conferences and was program committee chair for SODA'96, FOCS'05, and EC'13.

Leon Bottou (Facebook AI Research)

Léon Bottou received a Diplôme from l'École Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from the École Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He was with AT&T Bell Labs from 1991 to 1992 and with AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data-mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning. His contributions to this field address theory, algorithms, and large-scale applications. Léon's secondary research interest is data compression and coding. His best-known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of Kxen Inc.

Sebastian Nowozin (Microsoft Research Cambridge)
