

Poster

Trading robust representations for sample complexity through self-supervised visual experience

Andrea Tacchetti · Stephen Voinea · Georgios Evangelopoulos

Room 210 #36

Keywords: [ Deep Autoencoders ] [ Representation Learning ] [ Embedding Approaches ] [ Visual Perception ] [ Few-Shot Learning Approaches ]


Abstract:

Learning in small-sample regimes is among the most remarkable features of the human perceptual system. This ability is related to robustness to transformations, which is acquired through visual experience in the form of weak or self-supervision during development. We explore the idea of allowing artificial systems to learn representations of visual stimuli through weak supervision prior to downstream supervised tasks. We introduce a novel loss function for representation learning using unlabeled image sets and video sequences, and experimentally demonstrate that these representations support one-shot learning and reduce the sample complexity of multiple recognition tasks. We establish the existence of a trade-off between the size of the weakly supervised data set, obtained automatically from video sequences, and the size of the fully supervised data set. Our results suggest that equivalence sets other than class labels, which are abundant in unlabeled visual experience, can be used for self-supervised learning of semantically relevant image embeddings.
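The abstract does not spell out the loss function itself. As a minimal illustrative sketch, one standard way to exploit equivalence sets such as frames drawn from the same video sequence is a triplet-style embedding loss: embeddings of frames from the same sequence are pulled together, while frames from different sequences are pushed apart by a margin. The function name `sequence_triplet_loss`, the margin value, and the batch layout below are assumptions for illustration, not the loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def sequence_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hypothetical triplet loss over video-derived equivalence sets.

    anchor, positive: embeddings of two frames from the same sequence
    negative:         embedding of a frame from a different sequence
    NOTE: an illustrative sketch, not the paper's actual loss function.
    """
    # L2-normalize embeddings so squared distances lie on a common scale.
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)

    d_pos = (anchor - positive).pow(2).sum(dim=1)  # same-sequence distance
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # cross-sequence distance

    # Hinge: same-sequence pairs should be closer than cross-sequence
    # pairs by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()

# Example with random 128-dim embeddings for a batch of 8 frame triplets.
a = torch.randn(8, 128, requires_grad=True)
p = torch.randn(8, 128, requires_grad=True)
n = torch.randn(8, 128, requires_grad=True)
loss = sequence_triplet_loss(a, p, n)
loss.backward()  # gradients would flow back into an embedding network
```

An embedding trained this way could then be frozen and paired with only a handful of labeled examples per class, which is the one-shot regime in which a trade-off between weakly supervised and fully supervised data set sizes, of the kind the abstract describes, would be measured.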
