Negative Momentum for Improved Game Dynamics
Reyhane Askari Hemmat
2018 Contributed Talk
in
Workshop: Smooth Games Optimization and Machine Learning
Abstract
Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. Next, we show empirically that alternating gradient updates with a negative momentum term achieve convergence on the notoriously difficult-to-train saturating GANs.
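The abstract's central idea, alternating gradient updates with a negative momentum term, can be illustrated on the simplest smooth game, the bilinear problem min_x max_y xy, whose equilibrium is (0, 0). The sketch below is only illustrative: the toy objective, step size, and momentum value are assumptions, not the authors' experimental setup. With a negative momentum coefficient the alternating iterates slowly spiral toward the equilibrium, whereas setting it to zero leaves them cycling at roughly constant distance.

```python
import math

# Illustrative sketch (not the authors' code): alternating gradient updates
# with a heavy-ball momentum term on the bilinear game
#     min_x max_y  f(x, y) = x * y,
# whose unique equilibrium is (0, 0). Step size and momentum are assumed values.
eta = 0.1           # step size
beta = -0.5         # negative momentum coefficient (beta = 0 merely cycles)
x, y = 1.0, 1.0     # players' parameters
x_prev, y_prev = x, y

for t in range(5000):
    # Player 1 (minimizer): gradient of x*y w.r.t. x is y.
    x_new = x - eta * y + beta * (x - x_prev)
    x_prev, x = x, x_new
    # Player 2 (maximizer): uses the freshly updated x (alternating update).
    y_new = y + eta * x + beta * (y - y_prev)
    y_prev, y = y, y_new
    if (t + 1) % 1000 == 0:
        print(t + 1, math.hypot(x, y))  # distance to the equilibrium shrinks
```

Rerunning the loop with `beta = 0.0` shows the contrast: the printed distance stays near its initial value instead of decaying, which is the instability of plain alternating updates that negative momentum is meant to damp.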