Poster in Workshop: Symmetry and Geometry in Neural Representations

A Comparison of Equivariant Vision Models with ImageNet Pre-training

David Klee · Jung Yeon Park · Robert Platt · Robin Walters


Abstract:

Neural networks pre-trained on large datasets provide useful embeddings for downstream tasks and allow researchers to iterate with less compute. For computer vision tasks, ImageNet pre-trained models can be easily downloaded for fine-tuning. However, no such pre-trained models are available that are equivariant to image transformations. In this work, we implement several equivariant versions of the residual network architecture and publicly release the weights after training on ImageNet. Additionally, we perform a comparison of enforced vs. learned equivariance in the largest data regime to date.
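As a rough illustration of what an equivariant version of a residual block can look like, here is a minimal sketch using the e2cnn library with a C8 rotation group. The group choice, representation types, and channel widths are assumptions for illustration only and need not match the architectures or released weights described in the paper.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# Hypothetical setup: 8 discrete rotations; the paper's models may use a
# different group, representations, or widths.
gspace = gspaces.Rot2dOnR2(N=8)

in_type = enn.FieldType(gspace, 3 * [gspace.trivial_repr])     # RGB input
feat_type = enn.FieldType(gspace, 16 * [gspace.regular_repr])  # regular-repr features


class EquivariantResBlock(torch.nn.Module):
    """Residual block built from rotation-equivariant convolutions."""

    def __init__(self, field_type):
        super().__init__()
        self.conv1 = enn.R2Conv(field_type, field_type, kernel_size=3, padding=1)
        self.bn1 = enn.InnerBatchNorm(field_type)
        self.relu = enn.ReLU(field_type)
        self.conv2 = enn.R2Conv(field_type, field_type, kernel_size=3, padding=1)
        self.bn2 = enn.InnerBatchNorm(field_type)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection preserves equivariance


# Lift an RGB image into a GeometricTensor, embed it, and run the block.
stem = enn.R2Conv(in_type, feat_type, kernel_size=3, padding=1)
block = EquivariantResBlock(feat_type)

x = enn.GeometricTensor(torch.randn(1, 3, 64, 64), in_type)
y = block(stem(x))
print(y.shape)  # (1, 16 * 8, 64, 64): 16 regular fields, each of size |C8| = 8
```

Because every layer maps fields of the same type to fields of the same type, the residual addition is itself equivariant, so stacking such blocks yields a network that commutes with rotations of the input image.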
