

Poster

MOVE: Unsupervised Movable Object Segmentation and Detection

Adam Bielski · Paolo Favaro

Hall J (level 1) #428

Keywords: [ Self-supervised learning ] [ Saliency Detection ] [ Object Discovery ] [ Unsupervised Learning ] [ Object Segmentation ] [ Object Detection ]


Abstract:

We introduce MOVE, a novel method to segment objects without any form of supervision. MOVE exploits the fact that foreground objects can be shifted locally relative to their initial position and still yield realistic (undistorted) images. This property allows us to train a segmentation model on a dataset of images without annotations and to achieve state-of-the-art (SotA) performance on several evaluation datasets for unsupervised salient object detection and segmentation. In unsupervised single object discovery, MOVE gives an average CorLoc improvement of 7.2% over the SotA, and in unsupervised class-agnostic object detection it gives a relative AP improvement of 53% on average. Our approach is built on top of self-supervised features (e.g., from DINO or MAE), an inpainting network (based on the Masked AutoEncoder), and adversarial training.
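The sketch below illustrates, in simplified form, the training idea the abstract describes: predict a foreground mask, shift the masked foreground, composite it onto an inpainted background, and train the mask predictor so the composite looks real to an adversarial critic. All module definitions, shapes, and the blurred-image "inpainting" stand-in are illustrative assumptions, not the authors' implementation; MOVE operates on frozen self-supervised features (e.g., DINO or MAE) and uses an MAE-based inpainting network.

```python
# Minimal, hypothetical sketch of the shift-and-composite training idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegmenter(nn.Module):
    """Stand-in for the mask head; MOVE trains its head on frozen features."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))  # soft foreground mask in [0, 1]

class TinyDiscriminator(nn.Module):
    """Stand-in for the adversarial critic that judges image realism."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realism score per image

def shift(x, dx, dy):
    """Translate images/masks by (dx, dy) pixels (wrap-around, for simplicity)."""
    return torch.roll(x, shifts=(dy, dx), dims=(2, 3))

def composite_shifted(image, mask, background, dx, dy):
    """Paste the shifted masked foreground onto an inpainted background."""
    fg_shifted = shift(image * mask, dx, dy)
    m_shifted = shift(mask, dx, dy)
    return m_shifted * fg_shifted + (1 - m_shifted) * background

# --- one illustrative generator step (discriminator update omitted) ---
segmenter = TinySegmenter()
critic = TinyDiscriminator()
opt_g = torch.optim.Adam(segmenter.parameters(), lr=1e-4)

images = torch.rand(4, 3, 64, 64)        # batch of unlabeled images
mask = segmenter(images)                  # predicted foreground mask
# MOVE fills the background with an MAE-based inpainter; a blurred copy
# is used here only to keep the sketch self-contained and runnable.
background = F.avg_pool2d(images, 9, stride=1, padding=4)
dx, dy = int(torch.randint(-8, 9, (1,))), int(torch.randint(-8, 9, (1,)))
fake = composite_shifted(images, mask, background, dx, dy)

# Train the segmenter so the shifted composite looks realistic to the critic.
loss_g = -critic(fake).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The key design point is that a composite only looks realistic when the mask covers a complete, movable object, so realism feedback alone is enough to supervise segmentation.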
