Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Federated Learning (SFL) and Momentum Contrast (MoCo). In MocoSFL, the large backbone model is split into a small client-side model and a large server-side model, and only the small client-side model is run on the client's local devices. MocoSFL is equipped with three components: (i) vector concatenation, which enables the use of a small batch size and reduces computation and memory requirements by orders of magnitude; (ii) feature sharing, which helps achieve high accuracy regardless of the quality and volume of local data; (iii) frequent synchronization, which helps achieve better non-IID performance because of smaller local model divergence. For a 1,000-client case with non-IID data (each client has data from 2 random classes of CIFAR-10), MocoSFL can achieve over 84% accuracy with a ResNet-18 model.
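As a concrete illustration of the split and the vector-concatenation step described in the abstract, the sketch below shows one plausible way to cut a ResNet-18 into a small client-side front-end and a large server-side back-end, and to merge the small per-client feature batches on the server into one large effective batch. The cut-layer position, client count, per-client batch size, and 128-d projection output are illustrative assumptions; the MoCo momentum encoder, negative-key queue, feature sharing, and synchronization steps are omitted. This is a minimal sketch, not the authors' released implementation.

```python
# Minimal sketch of the MocoSFL model split and server-side vector concatenation.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet18

CUT_LAYER = 1      # assumed cut: keep only the stem + first residual stage on the client
NUM_CLIENTS = 4    # illustrative; the paper scales to 1,000 clients
CLIENT_BATCH = 2   # tiny per-client batch, made viable by concatenation on the server

backbone = resnet18(num_classes=128)  # 128-d output head, MoCo-style projection (assumed)
layers = list(backbone.children())    # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc

# Client-side model: small front-end that runs on each device.
client_model = nn.Sequential(*layers[:4 + CUT_LAYER])
# Server-side model: the remaining (large) part of the backbone plus the projection head.
server_model = nn.Sequential(*layers[4 + CUT_LAYER:-1], nn.Flatten(), layers[-1])

# Each client computes activation vectors on its tiny local batch of CIFAR-sized images...
client_feats = [client_model(torch.randn(CLIENT_BATCH, 3, 32, 32))
                for _ in range(NUM_CLIENTS)]

# ...and the server concatenates them into one large batch before the contrastive step.
merged = torch.cat(client_feats, dim=0)   # (NUM_CLIENTS * CLIENT_BATCH, 64, 8, 8)
queries = server_model(merged)            # (NUM_CLIENTS * CLIENT_BATCH, 128) projections
print(queries.shape)
```

In this sketch the contrastive loss would see an effective batch of NUM_CLIENTS × CLIENT_BATCH samples even though each device only processes CLIENT_BATCH images, which is the intuition behind the memory and computation savings claimed for vector concatenation.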
Author Information
Jingtao Li (Arizona State University)
Lingjuan Lyu (Sony AI)
Daisuke Iso (Sony AI)
Chaitali Chakrabarti (Arizona State University)
Michael Spranger (Sony)
More from the Same Authors
- 2022 Poster: CalFAT: Calibrated Federated Adversarial Training with Label Skewness
  Chen Chen · Yuchen Liu · Xingjun Ma · Lingjuan Lyu
- 2022: The Emergence of Abstract and Episodic Neurons in Episodic Meta-RL
  Badr AlKhamissi · Muhammad ElNokrashy · Michael Spranger
- 2022: Feasible and Desirable Counterfactual Generation by Preserving Human Defined Constraints
  Homayun Afrabandpey · Michael Spranger
- 2022 Poster: Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization
  Zijie Zhang · Yang Zhou · Xin Zhao · Tianshi Che · Lingjuan Lyu
- 2022 Poster: CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
  Xuanli He · Qiongkai Xu · Yi Zeng · Lingjuan Lyu · Fangzhao Wu · Jiwei Li · Ruoxi Jia
- 2022 Poster: FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning
  Tao Qi · Fangzhao Wu · Chuhan Wu · Lingjuan Lyu · Tong Xu · Hao Liao · Zhongliang Yang · Yongfeng Huang · Xing Xie
- 2022 Poster: DENSE: Data-Free One-Shot Federated Learning
  Jie Zhang · Chen Chen · Bo Li · Lingjuan Lyu · Shuang Wu · Shouhong Ding · Chunhua Shen · Chao Wu
- 2022 Poster: Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling
  Junyuan Hong · Lingjuan Lyu · Jiayu Zhou · Michael Spranger
- 2021 Poster: Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
  Xinyi Xu · Lingjuan Lyu · Xingjun Ma · Chenglin Miao · Chuan Sheng Foo · Bryan Kian Hsiang Low
- 2021 Poster: Anti-Backdoor Learning: Training Clean Models on Poisoned Data
  Yige Li · Xixiang Lyu · Nodens Koren · Lingjuan Lyu · Bo Li · Xingjun Ma
- 2021 Poster: Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation
  Jinming Cui · Chaochao Chen · Lingjuan Lyu · Carl Yang · Wang Li
- 2020 Poster: Assessing SATNet's Ability to Solve the Symbol Grounding Problem
  Oscar Chang · Lampros Flokas · Hod Lipson · Michael Spranger
- 2020 Expo Talk Panel: Hypotheses Generation for Applications in Biomedicine and Gastronomy
  Michael Spranger · Kosuke Aoki