
Workshop
Bridging Game Theory and Deep Learning
Ioannis Mitliagkas · Gauthier Gidel · Niao He · Reyhane Askari Hemmat · Nika Haghtalab · Simon Lacoste-Julien

Sat Dec 14 08:00 AM -- 06:30 PM (PST) @ West Exhibition Hall A

Advances in generative modeling and adversarial learning gave rise to a recent surge of interest in differentiable two-player games, with much of the attention falling on generative adversarial networks (GANs). Solving these games introduces distinct challenges compared to the standard minimization tasks that the machine learning (ML) community is used to. A symptom of this issue is that ML and deep learning (DL) practitioners often apply single-objective optimization tools to game-theoretic problems. Our NeurIPS 2018 workshop, "Smooth games optimization in ML", aimed to rectify this situation, addressing theoretical aspects of games in machine learning, their special dynamics, and typical challenges. This year, we significantly expand our scope to tackle questions such as the design of game formulations for other classes of ML problems and the integration of learning with game theory, as well as their important applications. To that end, we have confirmed talks from Éva Tardos, David Balduzzi and Fei Fang. We will also solicit contributed posters and talks in the area.
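To see why games resist the standard minimization toolbox, here is a minimal sketch (our own illustration, not from the workshop materials) of simultaneous gradient descent-ascent on the simplest smooth game, the bilinear problem min_x max_y f(x, y) = x·y, whose unique equilibrium is (0, 0):

```python
import math

# Simultaneous gradient descent-ascent (GDA) on f(x, y) = x * y:
# x takes a descent step on f while y takes an ascent step.
def gda_bilinear(x, y, lr=0.1, steps=500):
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous update
    return x, y

# Starting near the equilibrium, the iterates spiral outward: each
# step multiplies the distance to (0, 0) by sqrt(1 + lr**2), so plain
# GDA diverges here even though gradient descent on any smooth convex
# minimization problem with this step size would converge.
x, y = gda_bilinear(1.0, 1.0)
print(math.hypot(x, y))  # far larger than the initial distance sqrt(2)
```

The same update rule that reliably minimizes a single loss cycles or diverges in the game setting, which is the gap in tooling the workshop targets.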

- **Sat 8:15 a.m. - 8:30 a.m.** Opening remarks
- **Sat 8:30 a.m. - 9:10 a.m.** Invited talk: Éva Tardos (Cornell)
- **Sat 9:10 a.m. - 9:30 a.m.** Morning poster spotlight
- **Sat 9:30 a.m. - 11:00 a.m.** Morning poster session and coffee break
- **Sat 11:00 a.m. - 11:40 a.m.** Invited talk: David Balduzzi (DeepMind)
- **Sat 11:40 a.m. - 12:05 p.m.** Contributed talk: What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?

  Minimax optimization has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs), adversarial training and multi-agent reinforcement learning. As most of these applications involve continuous nonconvex-nonconcave formulations, a very basic question arises: what is a proper definition of local optima? Most previous work answers this question using classical notions of equilibria from simultaneous games, where the min-player and the max-player act simultaneously. In contrast, most applications in machine learning, including GANs and adversarial training, correspond to sequential games, where the order in which the players act is crucial (since minimax is in general not equal to maximin due to the nonconvex-nonconcave nature of the problems). The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting, called local minimax, as well as to present its properties and existence results. Finally, we establish a strong connection to a basic local search algorithm, gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points, up to some degenerate points.

  *Praneeth Netrapalli*

- **Sat 12:05 p.m. - 12:30 p.m.**
Contributed talk: Characterizing Equilibria in Stackelberg Games

  This paper investigates the convergence of learning dynamics in Stackelberg games on continuous action spaces, a class of games distinguished by the hierarchical order of play between agents. We establish connections between the Nash and Stackelberg equilibrium concepts and characterize conditions under which attractors of simultaneous gradient descent are Stackelberg equilibria in zero-sum games. Moreover, we show that the only stable attractors of the Stackelberg gradient dynamics are Stackelberg equilibria in zero-sum games. Using this insight, we develop two-timescale learning dynamics that converge to Stackelberg equilibria in zero-sum games and to the set of stable attractors in general-sum games.

  *Tanner Fiez*

- **Sat 12:30 p.m. - 2:00 p.m.** Lunch break
- **Sat 2:00 p.m. - 2:40 p.m.** Invited talk: Fei Fang (CMU)
- **Sat 2:40 p.m. - 3:05 p.m.** Contributed talk: On Solving Local Minimax Optimization: A Follow-the-Ridge Approach

  Many tasks in modern machine learning can be formulated as finding equilibria in *sequential* games. In particular, two-player zero-sum sequential games, also known as minimax optimization, have received growing interest. It is tempting to apply gradient descent to solve minimax optimization given its popularity in supervised learning. However, naive application of gradient descent fails to find local minimax, the analogue of local minima in minimax optimization, since the fixed points of the gradient dynamics might not be local minimax. In this paper, we propose *Follow-the-Ridge* (FR), an algorithm that locally converges to, and only converges to, local minimax. We show theoretically that the algorithm addresses the limit-cycling problem around fixed points, and is compatible with preconditioning and *positive* momentum.
Empirically, FR solves quadratic minimax problems and improves GAN training on simple tasks.

  *Yuanhao Wang*

- **Sat 3:05 p.m. - 3:30 p.m.** Contributed talk: Exploiting Uncertain Real-Time Information from Deep Learning in Signaling Games for Security and Sustainability

  Motivated by real-world deployment of drones for conservation, this paper advances the state of the art in security games with signaling. The well-known defender-attacker security games framework can help in planning such strategic deployments of sensors, human patrollers, and warning signals to ward off adversaries. However, we show that defenders can suffer significant losses when ignoring real-world uncertainties, such as detection uncertainty resulting from imperfect deep learning models, despite carefully planned security game strategies with signaling. In fact, defenders may perform worse than forgoing drones completely in this case. We address this shortcoming by proposing a novel game model that integrates signaling and sensor uncertainty; perhaps surprisingly, we show that defenders can still perform well via a signaling strategy that exploits the uncertain real-time information, primarily from deep learning models. For example, even in the presence of uncertainty, the defender still has the informational advantage of knowing whether or not she has actually detected the attacker, and she can design a signaling scheme to "mislead" the attacker, who is uncertain as to whether he has been detected. We provide a novel algorithm, scale-up techniques, and experimental results from simulation based on our ongoing deployment of a conservation drone system in South Africa.

  *Elizabeth Bondi*

- **Sat 3:30 p.m. - 4:00 p.m.** Coffee break
- **Sat 4:00 p.m. - 4:40 p.m.** Invited talk: Aryan Mokhtari (UT Austin)
- **Sat 4:40 p.m. - 5:00 p.m.** Afternoon poster spotlight
- **Sat 5:00 p.m. - 5:30 p.m.** Discussion panel
- **Sat 5:30 p.m. - 6:30 p.m.** Concluding remarks and afternoon poster session
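The ridge correction described in the Follow-the-Ridge abstract can be sketched in a few lines. This is our own toy illustration, not code from the paper: the quadratic f(x, y) = 3xy - y² and the step size are our choices, and the update follows the abstract's idea of correcting the follower's ascent step by -H_yy⁻¹ H_yx times the leader's step so the iterates track the ridge y*(x) = argmax_y f(x, y).

```python
import math

# Toy quadratic (our choice): f(x, y) = 3*x*y - y**2, whose local
# minimax point is (0, 0). We compare plain gradient descent-ascent
# (GDA) with a Follow-the-Ridge-style corrected update.
def run(x, y, lr=0.25, steps=100, follow_ridge=False):
    for _ in range(steps):
        gx = 3 * y              # df/dx
        gy = 3 * x - 2 * y      # df/dy
        h_yy, h_yx = -2.0, 3.0  # d2f/dy2, d2f/dydx (constant here)
        dx = -lr * gx           # leader: descent step on x
        dy = lr * gy            # follower: ascent step on y
        if follow_ridge:
            # ridge correction: -H_yy^{-1} H_yx * dx keeps y near
            # the ridge y*(x) as x moves
            dy += -(h_yx / h_yy) * dx
        x, y = x + dx, y + dy
    return math.hypot(x, y)    # distance to the local minimax point

# At this step size, plain GDA spirals away from (0, 0), while the
# ridge-corrected update converges to the local minimax point.
print(run(1.0, 1.0, follow_ridge=False))  # grows
print(run(1.0, 1.0, follow_ridge=True))   # shrinks toward 0
```

On this quadratic the corrected dynamics have spectral radius well below 1 while plain GDA's complex eigenvalues lie outside the unit circle, which is the limit-cycling failure mode the talk targets.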

#### Author Information

##### Gauthier Gidel (Mila)

I am a Ph.D. student supervised by Simon Lacoste-Julien. I graduated from ENS Ulm and Université Paris-Saclay, and was a visiting Ph.D. student at Sierra. I also worked for six months as a freelance data scientist for Monsieur Drive (acquired by Criteo), and I recently co-founded a startup called Krypto. I am currently pursuing my Ph.D. at Mila. My work focuses on optimization applied to machine learning. More details can be found in my resume. My research aims to develop new optimization algorithms and to understand the role of optimization in the learning procedure; in short, to learn faster and better. I identify with the fields of machine learning (NIPS, ICML, AISTATS and ICLR) and optimization (SIAM OP).

##### Simon Lacoste-Julien (Mila, Université de Montréal & SAIL Montreal)

Simon Lacoste-Julien is an associate professor at Mila and DIRO, Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads, part time, the SAIT AI Lab Montreal from Samsung. His research interests are machine learning and applied math, with applications in related fields like computer vision and natural language processing. He obtained a B.Sc. in math, physics and computer science from McGill and a Ph.D. in computer science from UC Berkeley, and did a postdoc at the University of Cambridge. He spent a few years as research faculty at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016 to answer the call from Yoshua Bengio to help grow the Montreal AI ecosystem.