Poster
Learning to See by Looking at Noise
Manel Baradad · Jonas Wulff · Tongzhou Wang · Phillip Isola · Antonio Torralba

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data, but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations.
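The abstract's pipeline can be illustrated with a minimal sketch: sample images from a randomly initialized, untrained deep generator and train an encoder on them with a contrastive (NT-Xent) loss. The code below assumes PyTorch; the architectures, augmentations, and hyperparameters are illustrative placeholders, not the authors' actual models or training setup.

```python
# Minimal sketch (not the authors' code): contrastive learning on images
# sampled from an untrained, randomly initialized generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomGenerator(nn.Module):
    """Untrained conv generator: random latents -> 3x64x64 'noise' images."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),            # 64x64
        )

    @torch.no_grad()
    def sample(self, n):
        z = torch.randn(n, self.latent_dim, 1, 1)
        return self.net(z)

class Encoder(nn.Module):
    """Small conv encoder trained on the noise images with a contrastive loss."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent / InfoNCE loss between two views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / temperature                  # cosine sims (z is normalized)
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def random_views(x):
    """Two cheap augmentations (flip + crop); real pipelines use stronger ones."""
    def aug(img):
        if torch.rand(1) < 0.5:
            img = torch.flip(img, dims=[3])
        i, j = torch.randint(0, 9, (2,))
        img = img[:, :, i:i + 56, j:j + 56]
        return F.interpolate(img, size=64, mode='bilinear', align_corners=False)
    return aug(x), aug(x)

if __name__ == "__main__":
    gen, enc = RandomGenerator(), Encoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    for step in range(100):                        # toy training loop
        imgs = gen.sample(64)                      # procedural "noise" images
        v1, v2 = random_views(imgs)
        loss = nt_xent(enc(v1), enc(v2))
        opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setup the generator's weights are never updated; only the encoder learns, so whatever structure the sampled noise contains is the sole signal shaping the learned representation.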

Author Information

Manel Baradad (Massachusetts Institute of Technology)
Jonas Wulff (Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology)
Tongzhou Wang (Massachusetts Institute of Technology)
Phillip Isola (Massachusetts Institute of Technology)
Antonio Torralba (Massachusetts Institute of Technology)
