Few Shot Image Generation via Implicit Autoencoding of Support Sets
Shenyang Huang · Kuan-Chieh Wang · Guillaume Rabusseau · Alireza Makhzani
Event URL: https://openreview.net/forum?id=fem00ckyS8t

Recent generative models such as generative adversarial networks have achieved remarkable success in generating realistic images, but they require large training datasets and substantial computational resources. The goal of few-shot image generation is to learn the distribution of a new dataset from only a handful of examples by transferring knowledge learned from structurally similar datasets. Toward this goal, we propose the “Implicit Support Set Autoencoder” (ISSA), which adversarially learns the relationship across datasets using an unsupervised dataset representation, while the distribution of each individual dataset is learned using implicit distributions. Given a few examples from a new dataset, ISSA can generate new samples by inferring the representation of the underlying distribution in a single forward pass. We demonstrate significant gains from our method in generating high-quality and diverse images for unseen classes on the Omniglot and CelebA datasets in few-shot image generation settings.
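The abstract describes inference as a single forward pass: a support-set encoder infers a dataset-level representation from a few examples, and a conditional generator maps noise plus that representation to new samples. The PyTorch sketch below is a minimal, hypothetical illustration of that inference pipeline only; the module names, layer sizes, and mean-pooling encoder are assumptions, not the authors' implementation, and the paper's adversarial training with implicit distributions is not shown.

```python
import torch
import torch.nn as nn

class SupportSetEncoder(nn.Module):
    """Permutation-invariant encoder (assumed): embeds each support image
    and mean-pools to a single dataset-level representation."""
    def __init__(self, in_dim, rep_dim):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                   nn.Linear(256, rep_dim))

    def forward(self, support_set):                  # (k, in_dim)
        return self.embed(support_set).mean(dim=0)   # (rep_dim,)

class ConditionalGenerator(nn.Module):
    """Implicit-style generator (assumed): maps noise conditioned on the
    dataset representation to a flattened image."""
    def __init__(self, noise_dim, rep_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim + rep_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, rep):
        rep = rep.expand(z.size(0), -1)              # broadcast to the batch
        return self.net(torch.cat([z, rep], dim=1))

# Single forward pass at test time: infer the representation of an unseen
# class from a 5-shot support set, then sample new images conditioned on it.
encoder = SupportSetEncoder(in_dim=784, rep_dim=64)
generator = ConditionalGenerator(noise_dim=32, rep_dim=64, out_dim=784)
support = torch.rand(5, 784)                         # 5 flattened example images
rep = encoder(support)                               # dataset representation
samples = generator(torch.randn(16, 32), rep)        # 16 new samples for that class
```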

Author Information

Shenyang Huang (McGill University, Mila)

I am a PhD student at Mila and McGill University, supervised by Professor Reihaneh Rabbany and Professor Guillaume Rabusseau.

Kuan-Chieh Wang (University of Toronto)
Guillaume Rabusseau (Mila - Université de Montréal)
Alireza Makhzani (University of Toronto)
