An important component of autoencoder methods is the mechanism by which the information capacity of the latent representation is minimized or limited. In this work, the rank of the covariance matrix of the codes is implicitly minimized by relying on the fact that gradient descent learning in multi-layer linear networks leads to minimum-rank solutions. By inserting a number of extra linear layers between the encoder and the decoder, the system spontaneously learns representations with a low effective dimension. The model, dubbed Implicit Rank-Minimizing Autoencoder (IRMAE), is simple, deterministic, and learns a continuous latent space. We demonstrate the validity of the method on several image generation and representation learning tasks.
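The core architectural change is small: a stack of square linear maps (no nonlinearity) is placed between the encoder output and the decoder input, and training proceeds with an ordinary reconstruction loss. Below is a minimal sketch of this idea in PyTorch; the encoder/decoder bodies, latent size, and number of linear layers are illustrative placeholders rather than the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class IRMAE(nn.Module):
    """Autoencoder with extra linear layers between encoder and decoder.

    Gradient descent on the resulting deep linear chain implicitly biases
    the latent codes toward a low effective rank, without any explicit
    rank or sparsity penalty.
    """

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 latent_dim: int = 128, num_linear: int = 4):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # Square linear maps with no activation in between: at inference
        # their product is a single matrix, so capacity is unchanged, but
        # the overparameterized factorization changes the training dynamics.
        self.linear_stack = nn.Sequential(
            *[nn.Linear(latent_dim, latent_dim, bias=False)
              for _ in range(num_linear)]
        )

    def forward(self, x: torch.Tensor):
        z = self.linear_stack(self.encoder(x))
        return self.decoder(z), z

# Usage sketch: train with a plain reconstruction objective, e.g.
#   x_hat, z = model(x)
#   loss = torch.nn.functional.mse_loss(x_hat, x)
```

Because the inserted layers are purely linear, they can be collapsed into the decoder after training, so the deterministic autoencoder's runtime cost is essentially unchanged.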
Author Information
Li Jing (Facebook AI Research)
Li Jing is a postdoctoral researcher at Facebook AI Research (FAIR), working with Yann LeCun on self-supervised learning. Li is also interested in representation learning, optimization, flow-based models, and energy-based models. Before joining FAIR, he obtained his PhD in physics at MIT.
Jure Zbontar (Facebook)
Yann LeCun (Facebook)
More from the Same Authors
- 2021: Deep generative models create new and diverse protein structures
  Zeming Lin · Tom Sercu · Yann LeCun · Alex Rives