Overview
Advances in generative modeling and adversarial learning gave rise to a recent surge of interest in smooth two-player games, specifically in the context of learning generative adversarial networks (GANs). Solving these games raises intrinsically different challenges than the minimization tasks the machine learning community is used to. The goal of this workshop is to bring together the several communities interested in such smooth games, in order to present what is known on the topic and to identify current open questions, such as how to handle the non-convexity appearing in GANs.
Background and objectives
A number of problems and applications in machine learning are formulated as games. A special class of games, smooth games, has come into the spotlight recently with the advent of GANs. In a two-player smooth game, each player attempts to minimize their differentiable cost function, which also depends on the action of the other player. The dynamics of such games are distinct from the better-understood dynamics of optimization problems. For example, the Jacobian of gradient descent on a smooth two-player game can be non-symmetric and can have complex eigenvalues. Recent work by ML researchers has identified these dynamics as a key challenge for efficiently solving such problems.
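To make this concrete, here is a minimal NumPy sketch (our illustration, not part of the workshop text) for the simplest smooth game, min_x max_y xy: the Jacobian of its simultaneous-gradient vector field is non-symmetric with purely imaginary eigenvalues, so the gradient dynamics rotate around the equilibrium rather than converge to it.

```python
import numpy as np

# For f(x, y) = x * y, the simultaneous-gradient vector field is
# v(x, y) = (df/dx, -df/dy) = (y, -x). Its Jacobian is constant:
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Non-symmetric, with purely imaginary eigenvalues (+1j and -1j):
# the dynamics rotate around the equilibrium (0, 0) instead of converging.
print(np.linalg.eigvals(J))
```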
A major hurdle for relevant research in the ML community is the lack of interaction with the mathematical programming and game theory communities, where similar problems have been tackled in the past, yielding useful tools. While ML researchers are quite familiar with the convex optimization toolbox from mathematical programming, they are less familiar with the tools for solving games. For example, the extragradient algorithm for solving variational inequalities has been known in the mathematical programming literature for decades; the ML community, however, has until recently mainly appealed to gradient descent to optimize adversarial objectives.
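As a point of comparison, here is a hedged sketch of the extragradient update on the same toy game min_x max_y xy (the step size and iteration count are illustrative choices of ours):

```python
import numpy as np

def v(z):
    # Simultaneous-gradient field of f(x, y) = x * y: (df/dx, -df/dy) = (y, -x).
    x, y = z
    return np.array([y, -x])

eta = 0.1
z = np.array([1.0, 1.0])
for _ in range(2000):
    z_look = z - eta * v(z)   # extrapolation ("look-ahead") step
    z = z - eta * v(z_look)   # update using the gradient at the look-ahead point
print(z)  # approaches the equilibrium (0, 0); plain simultaneous GD spirals outward
```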
The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the state of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We also encourage, and will devote time during the workshop to, work that identifies and discusses open, forward-looking problems of interest to the NIPS community.
Examples of topics of interest to the workshop are as follows:
- Other examples of smooth games in machine learning (e.g. actor-critic models in RL).
- Standard or novel algorithms to solve smooth games.
- Empirical tests of algorithms on GAN applications.
- Existence and uniqueness results for equilibria in smooth games.
- Can approximate equilibria have better properties than exact ones? [Arora 2017, Lipton and Young 1994].
- Variational inequality algorithms [Harker and Pang 1990, Gidel et al. 2018].
- Handling stochasticity [Hazan et al. 2017] or non-convexity [Grnarova et al. 2018] in smooth games.
- Related topics from mathematical programming (e.g. bilevel optimization) [Pfau and Vinyals 2016].
Fri 5:30 a.m. - 5:50 a.m. | Opening remarks (Talk) | Simon Lacoste-Julien · Gauthier Gidel
Fri 5:50 a.m. - 6:30 a.m. | Improving Generative Adversarial Networks using Game Theory and Statistics (Invited Talk) | Constantinos Daskalakis
Generative Adversarial Networks (GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are trained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both optimization and statistics. Their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these important challenges. We use game theory to improve GAN training, and statistics to scale up the dimensionality of the generated distributions.
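One concrete technique from this line of work is optimistic gradient descent (Daskalakis et al., 2018, "Training GANs with Optimism"). Below is our own minimal sketch of it on the toy bilinear game min_x max_y xy, with an illustrative step size; it is not the speaker's code.

```python
import numpy as np

def v(z):
    # Simultaneous-gradient field of f(x, y) = x * y.
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_prev = np.array([1.0, 1.0])
z = z_prev.copy()
for _ in range(3000):
    # Optimistic update: a standard step plus a correction using the last gradient.
    z_next = z - 2 * eta * v(z) + eta * v(z_prev)
    z_prev, z = z, z_next
print(z)  # the last iterate approaches the equilibrium (0, 0)
```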
Fri 6:30 a.m. - 7:00 a.m. | Poster spotlight (Spotlights) | Tianbao Yang · Pavel Dvurechenskii · Panayotis Mertikopoulos · Hugo Berard
1) Provable Non-Convex Min-Max Optimization; Mingrui Liu, Hassan Rafique, Qihang Lin and Tianbao Yang.
2) Solving differential games by methods for finite-dimensional saddle-point problems; Pavel Dvurechensky, Yurii Nesterov and Vladimir Spokoiny.
3) Generalized Mirror Prox Algorithm for Variational Inequalities; Pavel Dvurechensky, Alexander Gasnikov, Fedor Stonyakin and Alexander Titov.
4) On the convergence of stochastic forward-backward-forward algorithms with variance reduction in pseudo-monotone variational inequalities; Mathias Staudigl, Radu Ioan Bot, Phan Tu Vuong and Panayotis Mertikopoulos.
5) A Variational Inequality Perspective on Generative Adversarial Networks; Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent and Simon Lacoste-Julien.
Fri 7:00 a.m. - 7:30 a.m. | Poster session
Fri 7:30 a.m. - 8:00 a.m. | Morning coffee break
Fri 8:00 a.m. - 8:40 a.m. | Smooth Games in Machine Learning Beyond GANs (Invited Talk) | Niao He
This talk will discuss a wide spectrum of recent advances in machine learning that use smooth games, beyond the phenomenally successful GANs. Showcases include reinforcement learning, robust and adversarial machine learning, approximate Bayesian computation, and maximum likelihood estimation in exponential families. We show that all of these classical machine learning tasks can be reduced to solving (non-)convex-concave min-max optimization problems. Hence, it is of paramount importance to develop a good theoretical understanding of, and principled algorithms for, min-max optimization. We will review some of the theory and algorithms for smooth games and variational inequalities in the convex regime and shed some light on their counterparts in the non-convex regime.
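As one toy instance of such a reduction (our example, not from the talk), distributionally robust least squares is a convex-concave min-max problem, min_w max_{p in simplex} sum_i p_i (a_i w - b_i)^2, which gradient descent on w combined with exponentiated-gradient ascent on p solves in the averaged sense:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])   # toy data (our assumption)
b = np.array([0.5, 1.0, 0.2])
w, p = 0.0, np.ones(3) / 3       # model parameter and adversarial data weights
eta, T = 0.05, 4000              # illustrative step size and horizon
w_avg, p_avg = 0.0, np.zeros(3)
for _ in range(T):
    losses = (a * w - b) ** 2
    w -= eta * np.sum(p * 2 * a * (a * w - b))  # descent on the weighted loss
    p *= np.exp(eta * losses)                   # mirror ascent over the simplex
    p /= p.sum()
    w_avg += w / T
    p_avg += p / T
print(w_avg, p_avg)  # averaged iterates approximate the saddle point
```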
Fri 8:40 a.m. - 9:00 a.m. | Finding Mixed Nash Equilibria of Generative Adversarial Networks (Contributed Talk) | Volkan Cevher
We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash equilibria (NE) perspective. Inspired by classical prox methods, we develop a novel algorithmic framework for GANs via an infinite-dimensional two-player game and prove rigorous convergence rates to the mixed NE. We then propose a principled procedure to reduce our novel prox methods to simple sampling routines, leading to practically efficient algorithms. Finally, we provide experimental evidence that our approach outperforms methods that seek pure strategy equilibria, such as SGD, Adam, and RMSProp, both in speed and quality.
Fri 9:00 a.m. - 9:20 a.m. | Bounding Inefficiency of Equilibria in Continuous Actions Games using Submodularity and Curvature (Contributed Talk) | Pier Giuseppe Sessa
Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function satisfying recently introduced notions of submodularity over continuous domains. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem.
Fri 9:20 a.m. - 11:00 a.m. | Lunch break
Fri 11:00 a.m. - 11:40 a.m. | Building Algorithms by Playing Games (Invited Talk) | Jacob D Abernethy
A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, watch how the opponents compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template to produce new and interesting algorithms. I will describe this framework, show how it applies in several scenarios, and describe recent work that draws a connection to the Frank-Wolfe algorithm and Nesterov's accelerated gradient descent.
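A minimal instance of this template (our sketch, with an illustrative step size): both players of rock-paper-scissors run multiplicative weights against each other, and the time-averaged strategies approximate the equilibrium of the game.

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])  # row player's payoff matrix

eta, T = 0.1, 5000
x = np.array([0.5, 0.25, 0.25])  # start away from the equilibrium
y = np.array([0.25, 0.5, 0.25])
avg_x, avg_y = np.zeros(3), np.zeros(3)
for _ in range(T):
    avg_x += x / T
    avg_y += y / T
    x = x * np.exp(eta * (A @ y))     # row player maximizes x^T A y
    y = y * np.exp(-eta * (A.T @ x))  # column player minimizes it
    x, y = x / x.sum(), y / y.sum()
print(avg_x, avg_y)  # both averages approach the uniform strategy (1/3, 1/3, 1/3)
```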
Fri 11:40 a.m. - 12:00 p.m. | Negative Momentum for Improved Game Dynamics (Contributed Talk) | Reyhane Askari Hemmat
Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. Next, we show empirically that alternating gradient updates with a negative momentum term achieve convergence on the notoriously difficult-to-train saturating GANs.
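A hedged sketch of the idea on the bilinear game min_x max_y xy (the step size, momentum value, and horizon are illustrative choices of ours; see the paper for the actual analysis):

```python
import numpy as np

def run(beta, eta=0.1, T=2000):
    x, y = 1.0, 1.0
    dx, dy = 0.0, 0.0                # momentum buffers (previous updates)
    for _ in range(T):
        dx = -eta * y + beta * dx    # alternating updates: x moves first,
        x += dx
        dy = eta * x + beta * dy     # then y reacts to the new x
        y += dy
    return np.hypot(x, y)            # distance to the equilibrium (0, 0)

print(run(beta=-0.5))  # negative momentum: the iterates slowly spiral inward
print(run(beta=0.0))   # no momentum: iterates stay bounded but do not converge
```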
Fri 12:00 p.m. - 12:30 p.m. | Afternoon coffee break
Fri 12:30 p.m. - 12:50 p.m. | Regret Decomposition in Sequential Games with Convex Action Spaces and Losses (Contributed Talk) | Gabriele Farina
We derive a new framework for regret minimization on sequential decision problems and extensive-form games with general compact convex sets at each decision point and general convex losses, as opposed to prior work, which addressed simplex decision points and linear losses. We call our framework laminar regret decomposition. It generalizes the counterfactual regret minimization (CFR) algorithm to this more general setting. Furthermore, our framework enables a new proof of CFR even in the known setting, derived from the perspective of decomposing polytope regret, thereby leading to an arguably simpler interpretation of the algorithm. Our generalization to convex compact sets and convex losses allows us to develop new algorithms for several problems: regularized sequential decision making, regularized Nash equilibria in extensive-form games, and computing approximate extensive-form perfect equilibria. Our generalization also leads to the first regret-minimization algorithm for computing reduced-normal-form quantal response equilibria based on minimizing local regrets.
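For contrast with the classical setting this paper generalizes (simplex decision points, linear losses), here is our minimal sketch of regret matching in self-play on matching pennies; the averaged strategies approach the equilibrium:

```python
import numpy as np

A = np.array([[1., -1.],
              [-1., 1.]])  # row player's payoff in matching pennies

def rm_strategy(regret):
    # Play proportionally to positive cumulative regrets (uniform if none).
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.ones_like(pos) / len(pos)

T = 10000
reg_x, reg_y = np.zeros(2), np.zeros(2)
avg_x, avg_y = np.zeros(2), np.zeros(2)
for _ in range(T):
    x, y = rm_strategy(reg_x), rm_strategy(reg_y)
    avg_x += x / T
    avg_y += y / T
    u_x, u_y = A @ y, -A.T @ x   # expected payoff of each pure action
    reg_x += u_x - x @ u_x       # regret for not having played each action
    reg_y += u_y - y @ u_y
print(avg_x, avg_y)  # both averages approach the uniform strategy (1/2, 1/2)
```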
Fri 12:50 p.m. - 1:30 p.m. | An interpretation of GANs via online learning and game theory (Invited Talk) | Paulina Grnarova
Generative Adversarial Networks (GANs) have become one of the most powerful paradigms in learning real-world distributions. Despite this success, their minimax nature makes them fundamentally different from more classical generative models, raising novel challenges, most notably in terms of training and evaluation. Indeed, finding a saddle point is in general a harder task than converging to an extremum. We view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building upon ideas from online learning and game theory, we propose (i) a novel training method with provable convergence to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is an arbitrary network, and (ii) a natural metric for detecting non-convergence, namely the duality gap.
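A minimal sketch of the duality-gap metric on a toy game (our example: f(x, y) = xy with x, y restricted to [-1, 1]); the gap is zero exactly at a saddle point and positive otherwise, which is what makes it usable as a non-convergence detector:

```python
import numpy as np

def f(x, y):
    return x * y

def duality_gap(x0, y0, grid=np.linspace(-1.0, 1.0, 201)):
    # gap(x0, y0) = max_y f(x0, y) - min_x f(x, y0), here by grid search.
    return max(f(x0, y) for y in grid) - min(f(x, y0) for x in grid)

print(duality_gap(0.0, 0.0))   # 0.0 at the saddle point
print(duality_gap(0.5, -0.3))  # 0.8: positive away from the saddle
```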
Fri 1:30 p.m. - 2:00 p.m. | Poster spotlight #2 (Spotlights) | Nicolo Fusi · Chidubem Arachie · Joao Monteiro · Steffen Wolf
1) Model Compression with Generative Adversarial Networks; Ruishan Liu, Nicolo Fusi and Lester Mackey.
2) An Adversarial Labeling Game for Learning from Weak Supervision; Chidubem Arachie and Bert Huang.
3) Multi-objective training of Generative Adversarial Networks with multiple discriminators; Isabela Albuquerque, Joao Monteiro, Thang Doan, Breandan Considine, Tiago Falk and Ioannis Mitliagkas.
4) A GAN framework for Instance Segmentation using the Mutex Watershed Algorithm; Mandikal Vikram and Steffen Wolf.
Fri 2:00 p.m. - 2:30 p.m. | Discussion panel | Panel with invited speakers.
Fri 2:30 p.m. - 2:40 p.m. | Concluding remarks (Talk)
Fri 2:40 p.m. - 3:30 p.m. | Poster session afternoon (Poster session)
Author Information
Simon Lacoste-Julien (Mila, Université de Montréal)
Simon Lacoste-Julien is an associate professor at Mila and DIRO, Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads, part time, the SAIT AI Lab Montreal from Samsung. His research interests are machine learning and applied mathematics, with applications in related fields such as computer vision and natural language processing. He obtained a B.Sc. in mathematics, physics and computer science from McGill, a PhD in computer science from UC Berkeley, and did a postdoc at the University of Cambridge. He spent a few years as research faculty at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016 to answer the call from Yoshua Bengio to help grow the Montreal AI ecosystem.
Ioannis Mitliagkas (University of Montreal)
Gauthier Gidel (MILA)
Vasilis Syrgkanis (Microsoft Research)
Eva Tardos (Cornell)
Eva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University, and was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest. Tardos's research interests are algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms, approximation algorithms, and quantifying the efficiency of selfish routing. Her current interests include the effect of learning behavior in games. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and is an external member of the Hungarian Academy of Sciences. She is the recipient of a number of fellowships and awards, including the Packard Fellowship, the Gödel Prize, the Dantzig Prize, the Fulkerson Prize, the EATCS Award, and the IEEE Technical Achievement Award. She is Editor-in-Chief of the Journal of the ACM, has previously edited several other journals including the SIAM Journal on Computing and Combinatorica, has served as a program committee member for many conferences, and was program committee chair for SODA'96, FOCS'05, and EC'13.
Leon Bottou (Facebook AI Research)
Léon Bottou received a Diplôme from École Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from École Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He worked at AT&T Bell Labs from 1991 to 1992 and at AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning. His contributions to this field address theory, algorithms and large-scale applications. Léon's secondary research interest is data compression and coding. His best-known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of KXEN Inc.
Sebastian Nowozin (Microsoft Research Cambridge)