Recent generative models such as generative adversarial networks have achieved remarkable success in generating realistic images, but they require large training datasets and substantial computational resources. The goal of few-shot image generation is to learn the distribution of a new dataset from only a handful of examples by transferring knowledge from structurally similar datasets. Towards this goal, we propose the “Implicit Support Set Autoencoder” (ISSA), which adversarially learns the relationship across datasets using an unsupervised dataset representation, while the distribution of each individual dataset is modeled with an implicit distribution. Given a few examples from a new dataset, ISSA can generate new samples by inferring the representation of the underlying distribution in a single forward pass. We demonstrate significant gains from our method in generating high-quality and diverse images for unseen classes of the Omniglot and CelebA datasets in the few-shot image generation setting.
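The abstract describes two pieces: a set-level encoder that infers a dataset representation from a small support set in a single forward pass, and an implicit (noise-driven) generator conditioned on that representation. Since the paper's code is not part of this page, the snippet below is only a minimal PyTorch-style sketch of that general idea; the module names, dimensions, pooling choice, and architecture are illustrative assumptions, not the authors' ISSA implementation, and the adversarial training loop is omitted.

```python
# Sketch under assumptions: a permutation-invariant set encoder produces a
# dataset representation c from a few support images in one forward pass,
# and an implicit generator maps (noise z, representation c) to new samples.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Encodes a support set {x_1, ..., x_k} into one dataset representation."""
    def __init__(self, x_dim=784, c_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, c_dim))

    def forward(self, support):            # support: (k, x_dim)
        return self.phi(support).mean(0)   # mean pooling -> permutation invariant

class ImplicitGenerator(nn.Module):
    """Maps noise plus the dataset representation to a sample (implicit distribution)."""
    def __init__(self, z_dim=32, c_dim=64, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, z, c):               # z: (n, z_dim), c: (c_dim,)
        c = c.expand(z.size(0), -1)        # broadcast representation to batch
        return self.net(torch.cat([z, c], dim=1))

# Inference on an unseen class: one forward pass through the encoder,
# then sample as many new images as desired.
encoder, generator = SetEncoder(), ImplicitGenerator()
support = torch.rand(5, 784)                 # 5 example images (flattened 28x28)
c = encoder(support)                         # dataset representation
samples = generator(torch.randn(16, 32), c)  # 16 generated samples
```

In a full adversarial setup, a critic would compare generated samples against the support set (or held-out examples from the same dataset) to train both modules end to end.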
Author Information
Shenyang Huang (McGill University, Mila)
I am a PhD student at Mila and McGill University, supervised by Professor Reihaneh Rabbany and Professor Guillaume Rabusseau.
Kuan-Chieh Wang (University of Toronto)
Guillaume Rabusseau (Mila - Université de Montréal)
Alireza Makhzani (University of Toronto)
More from the Same Authors
- 2021 Spotlight: Lower and Upper Bounds on the Pseudo-Dimension of Tensor Network Models
  Behnoush Khavari · Guillaume Rabusseau
- 2021: Your Dataset is a Multiset and You Should Compress it Like One
  Daniel Severo · James Townsend · Ashish Khisti · Alireza Makhzani · Karen Ullrich
- 2022: DrML: Diagnosing and Rectifying Vision Models using Language
  Yuhui Zhang · Jeff Z. HaoChen · Shih-Cheng Huang · Kuan-Chieh Wang · James Zou · Serena Yeung
- 2022: Panel
  Vikas Garg · Pan Li · Srijan Kumar · Emanuele Rossi · Shenyang Huang
- 2022 Workshop: Temporal Graph Learning Workshop
  Reihaneh Rabbany · Jian Tang · Michael Bronstein · Shenyang Huang · Meng Qu · Kellin Pelrine · Jianan Zhao · Farimah Poursafaei · Aarash Feizi
- 2022 Poster: Towards Better Evaluation for Dynamic Link Prediction
  Farimah Poursafaei · Shenyang Huang · Kellin Pelrine · Reihaneh Rabbany
- 2021 Poster: Lower and Upper Bounds on the Pseudo-Dimension of Tensor Network Models
  Behnoush Khavari · Guillaume Rabusseau
- 2021 Poster: Variational Model Inversion Attacks
  Kuan-Chieh Wang · YAN FU · Ke Li · Ashish Khisti · Richard Zemel · Alireza Makhzani
- 2021 Poster: Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation
  Jixuan Wang · Kuan-Chieh Wang · Frank Rudzicz · Michael Brudno
- 2020: Invited Talk 9 Q&A by Guillaume
  Guillaume Rabusseau
- 2020: Invited Talk 9: Tensor Network Models for Structured Data
  Guillaume Rabusseau
- 2020: Panel Discussion 1: Theoretical, Algorithmic and Physical
  Jacob Biamonte · Ivan Oseledets · Jens Eisert · Nadav Cohen · Guillaume Rabusseau · Xiao-Yang Liu
- 2017 Poster: PixelGAN Autoencoders
  Alireza Makhzani · Brendan J Frey
- 2017 Poster: Dualing GANs
  Yujia Li · Alex Schwing · Kuan-Chieh Wang · Richard Zemel
- 2017 Spotlight: Dualing GANs
  Yujia Li · Alex Schwing · Kuan-Chieh Wang · Richard Zemel
- 2017 Poster: Hierarchical Methods of Moments
  Matteo Ruffini · Guillaume Rabusseau · Borja Balle
- 2017 Poster: Multitask Spectral Learning of Weighted Automata
  Guillaume Rabusseau · Borja Balle · Joelle Pineau
- 2016 Poster: Low-Rank Regression with Tensor Responses
  Guillaume Rabusseau · Hachem Kadri
- 2015 Poster: Winner-Take-All Autoencoders
  Alireza Makhzani · Brendan J Frey