It has been observed that visual classification models often rely mostly on spurious cues such as the image background, which hurts their robustness to distribution changes. To alleviate this shortcoming, we propose to monitor the model's relevancy signal and direct the model to base its prediction on the foreground object. This is done as a finetuning step, involving relatively few samples consisting of pairs of images and their associated foreground masks. Specifically, we encourage the model (i) to assign low relevance to background regions, (ii) to consider as much information as possible from the foreground, and (iii) to produce high-confidence decisions. When applied to Vision Transformer (ViT) models, a marked improvement in robustness to domain shifts is observed. Moreover, the foreground masks can be obtained automatically, from a self-supervised variant of the ViT model itself; therefore no additional supervision is required. Our code is available at: https://github.com/hila-chefer/RobustViT.
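For intuition, the three objectives described in the abstract could be instantiated roughly as in the sketch below. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the function name `relevance_guided_loss`, the loss weights, and the choice of entropy minimization for the confidence term are all assumptions made here for exposition.

```python
import torch

def relevance_guided_loss(relevance, fg_mask, logits,
                          lambda_bg=2.0, lambda_fg=0.3, lambda_conf=0.2):
    """Illustrative sketch of a relevance-guided finetuning objective.

    relevance: (B, H, W) per-patch relevancy map, assumed in [0, 1]
    fg_mask:   (B, H, W) binary foreground mask (per the abstract, this
               could come from a self-supervised ViT rather than labels)
    logits:    (B, num_classes) classifier outputs
    Weights are illustrative placeholders, not the paper's values.
    """
    bg_mask = 1.0 - fg_mask
    # (i) relevance assigned to background regions should be low
    loss_bg = (relevance * bg_mask).sum() / bg_mask.sum().clamp(min=1)
    # (ii) relevance should cover as much of the foreground as possible
    loss_fg = 1.0 - (relevance * fg_mask).sum() / fg_mask.sum().clamp(min=1)
    # (iii) keep decisions confident; here via prediction-entropy minimization
    probs = logits.softmax(dim=-1)
    loss_conf = -(probs * probs.clamp(min=1e-8).log()).sum(dim=-1).mean()
    return lambda_bg * loss_bg + lambda_fg * loss_fg + lambda_conf * loss_conf

# Usage with dummy tensors (batch of 4, a 14x14 patch grid, 1000 classes):
relevance = torch.rand(4, 14, 14)
fg_mask = (torch.rand(4, 14, 14) > 0.5).float()
logits = torch.randn(4, 1000)
loss = relevance_guided_loss(relevance, fg_mask, logits)
```

In an actual finetuning loop, this term would typically be combined with the standard classification loss so that the model keeps its predictive accuracy while its relevancy maps shift toward the foreground.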
Author Information
Hila Chefer (Tel Aviv University)
Idan Schwartz (Technion)
Lior Wolf (Tel Aviv University)
More from the Same Authors
- 2022 Poster: What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs
  Tal Shaharabany · Yoad Tewel · Lior Wolf
- 2022 Poster: Error Correction Code Transformer
  Yoni Choukroun · Lior Wolf
- 2021 Poster: Perceptual Score: What Data Modalities Does Your Model Perceive?
  Itai Gat · Idan Schwartz · Alex Schwing
- 2020 Poster: Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies
  Itai Gat · Idan Schwartz · Alex Schwing · Tamir Hazan
- 2017 Poster: High-Order Attention Models for Visual Question Answering
  Idan Schwartz · Alex Schwing · Tamir Hazan