Domain adaptation (DA) aims to alleviate the domain shift between a source domain and a target domain. Most DA methods require access to the source data, but this is often not possible (e.g., due to data privacy or intellectual property constraints). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where a source-pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might no longer align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining the local affinity of the target data and encouraging label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors, and propose a self-regularization loss to decrease the negative impact of noisy neighbors. Furthermore, to aggregate information with more context, we consider expanded neighborhoods with small affinity values. In the experiments we verify that the inherent structure of the target features is an important source of information for domain adaptation, and demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets. Code is available at https://github.com/Albert0147/SFDA_neighbors.
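The abstract describes three ingredients: label consistency weighted by local affinity, higher affinity for reciprocal neighbors with a self-regularization term against noisy neighbors, and an expanded neighborhood with small affinity. The PyTorch sketch below illustrates one way these terms could be combined; it is not the authors' implementation (see the linked repository for the official code), and the names and values used here (`neighborhood_consistency_loss`, `features`, `probs`, the 0.1 affinity for non-reciprocal and expanded neighbors) are illustrative assumptions.

```python
# Minimal sketch of neighborhood consistency with reciprocal and expanded
# neighbors, plus self-regularization. Assumed setup: L2-normalized target
# features and softmax predictions for N samples (e.g., from a memory bank).
import torch


def neighborhood_consistency_loss(features, probs, k=3, m=2, low_affinity=0.1):
    """features: (N, D) L2-normalized target features.
    probs:    (N, C) softmax predictions for the same N samples.
    k:        number of nearest neighbors per sample.
    m:        neighbors of each neighbor forming the expanded neighborhood.
    """
    n = features.size(0)
    with torch.no_grad():
        sim = features @ features.t()          # cosine similarity (features are normalized)
        sim.fill_diagonal_(-1.0)               # exclude self-matches
        _, knn = sim.topk(k, dim=1)            # (N, k) indices of nearest neighbors

        # Reciprocal neighbors: i and j are reciprocal if each lies in the other's k-NN.
        is_nn = torch.zeros(n, n, dtype=torch.bool, device=features.device)
        is_nn.scatter_(1, knn, True)
        reciprocal = (is_nn & is_nn.t()).float()            # (N, N)

        # Expanded neighborhood: the m nearest neighbors of each of the k neighbors.
        _, nn_of_nn = sim[knn.reshape(-1)].topk(m, dim=1)   # (N*k, m)
        expanded = nn_of_nn.reshape(n, k * m)               # (N, k*m)

        # Affinity: 1 for reciprocal neighbors, a small value otherwise.
        affinity = reciprocal.gather(1, knn) * (1.0 - low_affinity) + low_affinity  # (N, k)

    # Label consistency with affinity-weighted neighbors: maximize prediction dot products.
    loss_nn = -(affinity * (probs.unsqueeze(1) * probs[knn].detach()).sum(-1)).mean()
    # Expanded neighbors contribute with the small affinity value.
    loss_exp = -low_affinity * (probs.unsqueeze(1) * probs[expanded].detach()).sum(-1).mean()
    # Self-regularization: keep each prediction close to itself, damping noisy neighbors.
    loss_self = -(probs * probs.detach()).sum(-1).mean()
    return loss_nn + loss_exp + loss_self
```

In practice, the neighbor features and predictions would typically come from a memory bank that is updated during training, with only the current mini-batch contributing gradients; the detach calls above stand in for that design choice.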
Author Information
Shiqi Yang (Computer Vision Center Barcelona)
Working on how to efficiently adapt pretrained models to real-world environments under domain and category shift.
yaxing wang (Centre de Visió per Computador (CVC))
Joost van de Weijer (Computer Vision Center Barcelona)
Luis Herranz (Computer Vision Center)
Shangling Jui (Huawei)
Dr. Jui is the chief AI scientist of the Huawei Kirin team. His knowledge of AI and reinforcement learning has guided the team in building the ecosystem of the Kirin platform. He supports decisions on and investment in AI at Canadian universities, including UBC, SFU, UofToronto, UofAlberta, and UofWaterloo, through joint lab collaborations and local Huawei offices.
More from the Same Authors
- 2023 Poster: AutoGO: Automated Computation Graph Optimization for Neural Network Evolution
  Mohammad Salameh · Keith Mills · Negar Hassanpour · Fred Han · Shuting Zhang · Wei Lu · Shangling Jui · CHUNHUA ZHOU · Fengyu Sun · Di Niu
- 2023 Poster: FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
  Dipam Goswami · Yuyang Liu · Bartłomiej Twardowski · Joost van de Weijer
- 2023 Poster: Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing
  kai wang · Fei Yang · Shiqi Yang · Muhammad Atif Butt · Joost van de Weijer
- 2022 Workshop: Vision Transformers: Theory and applications
  Fahad Shahbaz Khan · Gul Varol · Salman Khan · Ping Luo · Rao Anwer · Ashish Vaswani · Hisham Cholakkal · Niki Parmar · Joost van de Weijer · Mubarak Shah
- 2022 Spotlight: Lightning Talks 1B-4
  Andrei Atanov · Shiqi Yang · Wanshan Li · Yongchang Hao · Ziquan Liu · Jiaxin Shi · Anton Plaksin · Jiaxiang Chen · Ziqi Pan · yaxing wang · Yuxin Liu · Stepan Martyanov · Alessandro Rinaldo · Yuhao Zhou · Li Niu · Qingyuan Yang · Andrei Filatov · Yi Xu · Liqing Zhang · Lili Mou · Ruomin Huang · Teresa Yeo · kai wang · Daren Wang · Jessica Hwang · Yuanhong Xu · Qi Qian · Hu Ding · Michalis Titsias · Shangling Jui · Ajay Sohmshetty · Lester Mackey · Joost van de Weijer · Hao Li · Amir Zamir · Xiangyang Ji · Antoni Chan · Rong Jin
- 2022 Spotlight: Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
  Shiqi Yang · yaxing wang · kai wang · Shangling Jui · Joost van de Weijer
- 2022 Poster: Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
  Shiqi Yang · yaxing wang · kai wang · Shangling Jui · Joost van de Weijer
- 2021 Poster: Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization
  Ke Sun · Yafei Wang · Yi Liu · yingnan zhao · Bo Pan · Shangling Jui · Bei Jiang · Linglong Kong
- 2020 Poster: DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs
  yaxing wang · Lu Yu · Joost van de Weijer
- 2020 Poster: RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning
  Riccardo Del Chiaro · Bartłomiej Twardowski · Andrew Bagdanov · Joost van de Weijer
- 2019: Coffee + Posters
  Benjamin Caine · Renhao Wang · Nazmus Sakib · Nana Otawara · Meha Kaushik · elmira amirloo · Nemanja Djuric · Johanna Rock · Tanmay Agarwal · Angelos Filos · Panagiotis Tigkas · Donsuk Lee · Wootae Jeon · Nikita Jaipuria · Pin Wang · Jinxin Zhao · Liangjun Zhang · Ashutosh Singh · Ershad Banijamali · Mohsen Rohani · Aman Sinha · Ameya Joshi · Ching-Yao Chan · Mohammed Abdou · Changhao Chen · Jong-Chan Kim · eslam mohamed · Matt OKelly · Nirvan Singhania · Hiroshi Tsukahara · Atsushi Keyaki · Praveen Palanisamy · Justin Norden · Micol Marchetti-Bowick · Yiming Gu · Hitesh Arora · Shubhankar Deshpande · Jeff Schneider · Shangling Jui · Vaneet Aggarwal · Tryambak Gangopadhyay · Qiaojing Yan
- 2018 Poster: Image-to-image translation for cross-domain disentanglement
  Abel Gonzalez-Garcia · Joost van de Weijer · Yoshua Bengio
- 2018 Poster: Memory Replay GANs: Learning to Generate New Categories without Forgetting
  Chenshen Wu · Luis Herranz · Xialei Liu · yaxing wang · Joost van de Weijer · Bogdan Raducanu
- 2011 Poster: Portmanteau Vocabularies for Multi-Cue Image Representation
  Fahad S Khan · Joost van de Weijer · Andrew D Bagdanov · Maria Vanrell