Poster

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

Sebastian Nowozin · Botond Cseke · Ryota Tomioka

Area 5+6+7+8 #106

Keywords: [ Variational Inference ] [ (Other) Probabilistic Models and Methods ] [ (Other) Unsupervised Learning Methods ] [ Deep Learning or Neural Networks ]


Abstract: Generative neural networks are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method makes it possible to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach, and that any $f$-divergence can be used for training generative neural networks. We discuss the effect of various choices of divergence function on training complexity and the quality of the obtained generative models.
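For reference, the existing variational divergence estimation approach alluded to in the abstract (due to Nguyen et al.) lower-bounds any $f$-divergence between a data distribution $P$ and a model distribution $Q$ by

$$ D_f(P \,\|\, Q) \;\ge\; \sup_{T \in \mathcal{T}} \Big( \mathbb{E}_{x \sim P}[T(x)] \;-\; \mathbb{E}_{x \sim Q}[f^*(T(x))] \Big), $$

where $f$ is the convex function defining the divergence, $f^*$ is its Fenchel conjugate, and $\mathcal{T}$ is a class of functions, in practice a discriminator network. Training alternates between maximizing the bound over $T$ and minimizing it over the generator parameters that define $Q$.

The following is a minimal, hypothetical PyTorch sketch of one such alternating training step for the Kullback-Leibler divergence, for which the output activation is $g_f(v) = v$ and the conjugate is $f^*(t) = \exp(t - 1)$; the network architectures, dimensions, and optimizer settings are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch of one f-GAN training step (not the authors' code).
# Divergence choice: Kullback-Leibler, for which the output activation is
# g_f(v) = v and the Fenchel conjugate is f*(t) = exp(t - 1).

def g_f(v):            # output activation for the KL divergence
    return v

def f_star(t):         # Fenchel conjugate f*(t) = exp(t - 1) for KL
    return torch.exp(t - 1)

# Placeholder networks: a generator mapping 16-d noise to 2-d samples, and a
# variational function ("discriminator") V mapping a sample to a scalar.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
variational = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
v_opt = torch.optim.Adam(variational.parameters(), lr=1e-4)

def training_step(real_batch):
    z = torch.randn(real_batch.size(0), 16)

    # Variational function ascends the lower bound
    #   F = E_P[g_f(V(x))] - E_Q[f*(g_f(V(x)))].
    t_real = g_f(variational(real_batch))
    t_fake = g_f(variational(generator(z).detach()))
    v_loss = -(t_real.mean() - f_star(t_fake).mean())
    v_opt.zero_grad()
    v_loss.backward()
    v_opt.step()

    # Generator descends the same bound; only the second term depends on
    # the generator, so it minimizes -E_Q[f*(g_f(V(x)))].
    t_fake = g_f(variational(generator(z)))
    g_loss = -f_star(t_fake).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Example usage with toy "real" data drawn from a 2-d Gaussian.
for _ in range(100):
    training_step(torch.randn(64, 2))
```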
