Generative Adversarial Networks (GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are trained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both Optimization and Statistics: their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these challenges, using Game Theory to improve GAN training and Statistics to scale up the dimensionality of the generated distributions.
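To make the two-player training loop concrete, here is a minimal sketch of alternating gradient updates for a GAN on a toy one-dimensional Gaussian target; the PyTorch framing, network sizes, learning rates, and target distribution are illustrative assumptions, not the setup discussed in the talk.

```python
# Minimal GAN training sketch (PyTorch). Two networks play a zero-sum-style
# game: the discriminator learns to tell real samples from generated ones,
# and the generator learns to fool it. Everything below (toy 1-D Gaussian
# target, layer sizes, learning rates) is an illustrative assumption.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = 3.0 + torch.randn(64, 1)      # samples from the target N(3, 1)
    fake = gen(torch.randn(64, 8))       # generator maps noise to samples

    # Discriminator step: push disc(real) toward 1 and disc(fake) toward 0.
    d_loss = bce(disc(real), real_label) + bce(disc(fake.detach()), fake_label)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: update the generator so the discriminator labels its
    # samples as real (the common non-saturating form of the objective).
    g_loss = bce(disc(fake), real_label)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Even in a toy setting like this, the alternating generator/discriminator updates can cycle rather than converge, which is the kind of training oscillation the abstract refers to.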
Author Information
Constantinos Daskalakis (MIT)
More from the Same Authors
- 2021 : Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021 : Near-Optimal No-Regret Learning in General Games »
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2021 : Spotlight 4: Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021 Poster: Near-Optimal No-Regret Learning in General Games »
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2021 Poster: Efficient Truncated Linear Regression with Unknown Noise Variance »
  Constantinos Daskalakis · Patroklos Stefanou · Rui Yao · Emmanouil Zampetakis
- 2021 Oral: Near-Optimal No-Regret Learning in General Games »
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2020 Poster: Tight last-iterate convergence rates for no-regret learning in multi-player games »
  Noah Golowich · Sarath Pattathil · Constantinos Daskalakis
- 2020 Poster: Truncated Linear Regression in High Dimensions »
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Poster: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Poster: Independent Policy Gradient Methods for Competitive Reinforcement Learning »
  Constantinos Daskalakis · Dylan Foster · Noah Golowich
- 2018 Poster: Learning and Testing Causal Models with Interventions »
  Jayadev Acharya · Arnab Bhattacharyya · Constantinos Daskalakis · Saravanan Kandasamy
- 2018 Poster: Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons »
  Nima Anari · Constantinos Daskalakis · Wolfgang Maass · Christos Papadimitriou · Amin Saberi · Santosh Vempala
- 2018 Poster: HOGWILD!-Gibbs can be PanAccurate »
  Constantinos Daskalakis · Nishanth Dikkala · Siddhartha Jayanti
- 2018 Poster: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization »
  Constantinos Daskalakis · Ioannis Panageas
- 2017 Poster: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data »
  Constantinos Daskalakis · Nishanth Dikkala · Gautam Kamath
- 2015 Poster: Optimal Testing for Properties of Distributions »
  Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath
- 2015 Spotlight: Optimal Testing for Properties of Distributions »
  Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath