A central challenge in training classification models in real-world federated systems is learning with non-IID data. To cope with this, most existing works enforce regularization in local optimization or refine the model aggregation scheme at the server. Other works share public datasets or synthesized samples to supplement the training of under-represented classes, or introduce a certain level of personalization. Though effective, these methods lack a deep understanding of how data heterogeneity affects each layer of a deep classification model. In this paper, we bridge this gap by performing an experimental analysis of the representations learned by different layers. Our observations are surprising: (1) there exists a greater bias in the classifier than in the other layers, and (2) the classification performance can be significantly improved by post-calibrating the classifier after federated training. Motivated by these findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model. Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks, including CIFAR-10, CIFAR-100, and CINIC-10. We hope that our simple yet effective method can shed light on future research on federated learning with non-IID data.
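Since the abstract describes CCVR only at a high level, the following is a minimal NumPy sketch of what classifier calibration with virtual representations could look like: the server merges per-class feature statistics reported by clients into one Gaussian per class, samples virtual features from these Gaussians, and re-fits only the linear classifier on them. The function names, the statistics format, and the plain gradient-descent calibration loop are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def aggregate_class_gaussians(client_stats):
    """Merge per-client (count, mean, covariance) feature statistics into one
    Gaussian per class, weighting clients by their sample counts."""
    gaussians = {}
    for cls, stats in client_stats.items():
        counts = np.array([n for n, _, _ in stats], dtype=np.float64)
        means = np.stack([m for _, m, _ in stats])
        covs = np.stack([v for _, _, v in stats])
        w = counts / counts.sum()
        mu = (w[:, None] * means).sum(axis=0)
        # Law of total covariance: mean within-client cov + between-client spread.
        diff = means - mu
        sigma = (w[:, None, None] * covs).sum(axis=0)
        sigma += (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0)
        gaussians[cls] = (mu, sigma)
    return gaussians

def sample_virtual_features(gaussians, n_per_class, rng):
    """Draw virtual representations from the approximated Gaussian of each class."""
    feats, labels = [], []
    for cls, (mu, sigma) in gaussians.items():
        feats.append(rng.multivariate_normal(mu, sigma, size=n_per_class))
        labels.append(np.full(n_per_class, cls))
    return np.concatenate(feats), np.concatenate(labels)

def calibrate_classifier(W, b, feats, labels, lr=0.1, epochs=200):
    """Re-train only the final linear layer (W: [C, d], b: [C]) on virtual
    features with softmax cross-entropy; the feature extractor stays frozen."""
    n, num_classes = len(labels), W.shape[0]
    onehot = np.eye(num_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W.T + b
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                      # dLoss/dlogits
        W -= lr * grad.T @ feats
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy usage: two clients report statistics for classes 0 and 1 in a 4-d feature space.
rng = np.random.default_rng(0)
d = 4
client_stats = {
    c: [(50, rng.normal(size=d), np.eye(d)), (30, rng.normal(size=d), np.eye(d))]
    for c in (0, 1)
}
gaussians = aggregate_class_gaussians(client_stats)
feats, labels = sample_virtual_features(gaussians, n_per_class=100, rng=rng)
W, b = calibrate_classifier(np.zeros((2, d)), np.zeros(2), feats, labels)
```

In the setting the abstract describes, only the classifier parameters are updated after federated training, so a calibration step of this kind is cheap relative to additional rounds of federated optimization.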
Author Information
Mi Luo (National University of Singapore)
Fei Chen (Huawei Noah's Ark Lab)
Dapeng Hu (National University of Singapore)
Yifan Zhang (National University of Singapore)
Jian Liang (CASIA)
Jiashi Feng (UC Berkeley)
More from the Same Authors
- 2021 : How Well Does Self-Supervised Pre-Training Perform with Streaming ImageNet? »
  Dapeng Hu · Qizhengqiu Lu · Lanqing Hong · Hailin Hu · Yifan Zhang · Zhenguo Li · Jiashi Feng
- 2021 : Architecture Personalization in Resource-constrained Federated Learning »
  Mi Luo · Fei Chen · Zhenguo Li · Jiashi Feng
- 2022 Poster: Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks »
  Jiyang Guan · Jian Liang · Ran He
- 2022 Spotlight: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning »
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Spotlight: Lightning Talks 6A-1 »
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Spotlight: Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks »
  Jiyang Guan · Jian Liang · Ran He
- 2022 Spotlight: Lightning Talks 3A-1 »
  Shu Ding · Wanxing Chang · Jiyang Guan · Mouxiang Chen · Guan Gui · Yue Tan · Shiyun Lin · Guodong Long · Yuze Han · Wei Wang · Zhen Zhao · Ye Shi · Jian Liang · Chenghao Liu · Lei Qi · Ran He · Jie Ma · Zemin Liu · Xiang Li · Hoang Tuan · Luping Zhou · Zhihua Zhang · Jianling Sun · Jingya Wang · LU LIU · Tianyi Zhou · Lei Wang · Jing Jiang · Yinghuan Shi
- 2022 Poster: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning »
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Poster: Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition »
  Yifan Zhang · Bryan Hooi · Lanqing Hong · Jiashi Feng
- 2022 Poster: Sharpness-Aware Training for Free »
  JIAWEI DU · Daquan Zhou · Jiashi Feng · Vincent Tan · Joey Tianyi Zhou
- 2021 : Contributed Talk 3: Architecture Personalization in Resource-constrained Federated Learning »
  Mi Luo · Fei Chen · Zhenguo Li · Jiashi Feng
- 2021 Poster: Towards Understanding Why Lookahead Generalizes Better Than SGD and Beyond »
  Pan Zhou · Hanshu Yan · Xiaotong Yuan · Jiashi Feng · Shuicheng Yan
- 2021 Poster: All Tokens Matter: Token Labeling for Training Better Vision Transformers »
  Zi-Hang Jiang · Qibin Hou · Li Yuan · Daquan Zhou · Yujun Shi · Xiaojie Jin · Anran Wang · Jiashi Feng
- 2021 Poster: Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning »
  Yifan Zhang · Bryan Hooi · Dapeng Hu · Jian Liang · Jiashi Feng
- 2021 Poster: Direct Multi-view Multi-person 3D Pose Estimation »
  tao wang · Jianfeng Zhang · Yujun Cai · Shuicheng Yan · Jiashi Feng
- 2019 : Coffee Break + Poster Session I »
  Wei-Hung Weng · Simon Kohl · Aiham Taleb · Arijit Patra · Khashayar Namdar · Matthias Perkonigg · Shizhan Gong · Abdullah-Al-Zubaer Imran · Amir Abdi · Ilja Manakov · Johannes C. Paetzold · Ben Glocker · Dushyant Sahoo · Shreyas Fadnavis · Karsten Roth · Xueqing Liu · Yifan Zhang · Alexander Preuhs · Fabian Eitel · Anusua Trivedi · Tomer Weiss · Darko Stern · Liset Vazquez Romaguera · Johannes Hofmanninger · Aakash Kaku · Oloruntobiloba Olatunji · Anastasia Razdaibiedina · Tao Zhang
- 2019 Poster: Multi-marginal Wasserstein GAN »
  Jiezhang Cao · Langyuan Mo · Yifan Zhang · Kui Jia · Chunhua Shen · Mingkui Tan
- 2014 Poster: Robust Logistic Regression and Classification »
  Jiashi Feng · Huan Xu · Shie Mannor · Shuicheng Yan
- 2013 Poster: Online Robust PCA via Stochastic Optimization »
  Jiashi Feng · Huan Xu · Shuicheng Yan
- 2013 Poster: Online PCA for Contaminated Data »
  Jiashi Feng · Huan Xu · Shie Mannor · Shuicheng Yan