Spectral normalization (SN) is a widely used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs). However, current understanding of SN's efficacy is limited. In this work, we show that SN controls two important failure modes of GAN training: exploding and vanishing gradients. Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization. This connection helps to explain why the most popular implementation of SN for GANs requires no hyper-parameter tuning, whereas stricter implementations of SN have poor empirical performance out-of-the-box. Unlike LeCun initialization, which only controls gradient vanishing at the beginning of training, SN preserves this property throughout training. Building on this theoretical understanding, we propose a new spectral normalization technique: Bidirectional Scaled Spectral Normalization (BSSN), which incorporates insights from later improvements to LeCun initialization: Xavier initialization and Kaiming initialization. Theoretically, we show that BSSN gives better gradient control than SN. Empirically, we demonstrate that it outperforms SN in sample quality and training stability on several benchmark datasets.
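For reference, the "most popular implementation of SN" the abstract refers to is the power-iteration-based normalization of Miyato et al., which divides each weight matrix by an estimate of its largest singular value. Below is a minimal sketch of that baseline in PyTorch; the function name spectral_norm_weight and the one-step power iteration default are illustrative assumptions, and the BSSN scaling proposed in the paper is not reproduced here.

```python
import torch
import torch.nn.functional as F

def spectral_norm_weight(W, u=None, n_iters=1, eps=1e-12):
    """Divide W by an estimate of its spectral norm (largest singular value).

    Sketch of the standard SN baseline (Miyato et al.), not the BSSN variant
    proposed in the paper.
    """
    W_mat = W.reshape(W.shape[0], -1)                    # flatten conv kernels to 2-D
    if u is None:
        u = torch.randn(W_mat.shape[0])                  # left singular vector estimate
    for _ in range(n_iters):
        v = F.normalize(W_mat.t() @ u, dim=0, eps=eps)   # update right singular vector
        u = F.normalize(W_mat @ v, dim=0, eps=eps)       # update left singular vector
    sigma = torch.dot(u, W_mat @ v)                      # approximate largest singular value
    return W / sigma, u                                  # reuse u across training steps
```

In practice, the same baseline is available through PyTorch's built-in torch.nn.utils.spectral_norm, which attaches this power iteration to a layer's weight as a forward pre-hook.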
Author Information
Zinan Lin (Carnegie Mellon University)
Vyas Sekar (Carnegie Mellon University)
Giulia Fanti (CMU)
More from the Same Authors
- 2022: Distributional Privacy for Data Sharing » Zinan Lin · Shuaiqi Wang · Vyas Sekar · Giulia Fanti
- 2018 Poster: Robustness of conditional GANs to noisy labels » Kiran Thekumparampil · Ashish Khetan · Zinan Lin · Sewoong Oh
- 2018 Spotlight: Robustness of conditional GANs to noisy labels » Kiran Thekumparampil · Ashish Khetan · Zinan Lin · Sewoong Oh
- 2018 Poster: PacGAN: The power of two samples in generative adversarial networks » Zinan Lin · Ashish Khetan · Giulia Fanti · Sewoong Oh