Poster
Labelling unlabelled videos from scratch with multi-modal self-supervision
Yuki Asano · Mandela Patrick · Christian Rupprecht · Andrea Vedaldi

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #211

A large part of the current success of deep learning lies in the effectiveness of data -- more precisely: of labelled data. Yet, labelling a dataset with human annotation continues to carry high costs, especially for videos. While in the image domain recent methods have made it possible to generate meaningful (pseudo-) labels for unlabelled datasets without supervision, this development is missing for the video domain, where learning feature representations is the current focus. In this work, we a) show that unsupervised labelling of a video dataset does not come for free from strong feature encoders and b) propose a novel clustering method that allows pseudo-labelling of a video dataset without any human annotations, by leveraging the natural correspondence between the audio and visual modalities. An extensive analysis shows that the resulting clusters have high semantic overlap with ground-truth human labels. We further introduce the first benchmarking results on unsupervised labelling of common video datasets.
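The core idea in the abstract -- turning paired audio and visual features of unlabelled clips into cluster pseudo-labels -- can be sketched naively as joint clustering of the concatenated embeddings. The sketch below is an illustrative assumption, not the paper's actual method: the function name, the plain k-means substitute, and the toy features are all hypothetical.

```python
import numpy as np

def pseudo_label(video_feats, audio_feats, k=2, iters=20):
    """Assign cluster pseudo-labels to unlabelled clips by running a
    simple k-means over the concatenated (visual, audio) embedding of
    each clip. Illustrative stand-in only, not the paper's clustering."""
    x = np.concatenate([video_feats, audio_feats], axis=1).astype(float)
    x /= np.linalg.norm(x, axis=1, keepdims=True) + 1e-8  # L2-normalise

    # Farthest-point initialisation: spread the k seeds across the data.
    centroids = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(x[d.argmax()])
    centroids = np.stack(centroids)

    for _ in range(iters):
        # Assign each clip to its nearest centroid.
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute centroids; keep the old one if a cluster empties.
        for c in range(k):
            if (labels == c).any():
                centroids[c] = x[labels == c].mean(0)
    return labels
```

With two well-separated groups of clips, clips in the same group end up sharing a pseudo-label, which is the behaviour the benchmark in the paper then compares against ground-truth classes.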

Author Information

Yuki Asano (University of Oxford)
Mandela Patrick (University of Oxford)
Christian Rupprecht (University of Oxford)
Andrea Vedaldi (University of Oxford / Facebook AI Research)