
MocoSFL: enabling cross-client collaborative self-supervised learning
Jingtao Li · Lingjuan Lyu · Daisuke Iso · Chaitali Chakrabarti · Michael Spranger
Event URL: https://openreview.net/forum?id=E4bicqjhurh

Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Federated Learning (SFL) and Momentum Contrast (MoCo). In MocoSFL, the large backbone model is split into a small client-side model and a large server-side model, and only the small client-side model is processed on each client's local device. MocoSFL is equipped with three components: (i) vector concatenation, which enables the use of small batch sizes and reduces computation and memory requirements by orders of magnitude; (ii) feature sharing, which helps achieve high accuracy regardless of the quality and volume of local data; and (iii) frequent synchronization, which improves non-IID performance by reducing local model divergence. For a 1,000-client case with non-IID data (each client has data from 2 random classes of CIFAR-10), MocoSFL achieves over 84% accuracy with a ResNet-18 model.
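The split-and-concatenate idea from the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the model is reduced to a single linear layer, and all dimensions (`NUM_CLIENTS`, `LOCAL_BATCH`, `FEAT_DIM`) are hypothetical. The point it shows is how tiny per-client batches of client-side activations are concatenated on the server into one large effective batch, which is what lets each client run with a very small batch size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 clients, each holding a tiny local batch of 4 samples.
NUM_CLIENTS, LOCAL_BATCH, IN_DIM, FEAT_DIM = 8, 4, 32, 16

def client_side_model(x, w):
    # Stand-in for the small client-side portion of the split backbone:
    # a single linear layer producing the "smashed" activations.
    return x @ w

# All clients share the same (frequently synchronized) client-side weights.
w_client = rng.standard_normal((IN_DIM, FEAT_DIM))

# Each client forwards its small local batch and sends the activations
# to the server instead of raw data.
smashed = [
    client_side_model(rng.standard_normal((LOCAL_BATCH, IN_DIM)), w_client)
    for _ in range(NUM_CLIENTS)
]

# Server-side vector concatenation: per-client mini-batches are stacked
# into one large batch before the server-side model and the MoCo loss.
server_batch = np.concatenate(smashed, axis=0)
print(server_batch.shape)  # (32, 16): effective batch = 8 clients x 4 samples
```

In this sketch the contrastive loss would then be computed on `server_batch`, so the effective batch size grows with the number of participating clients rather than with any single client's memory budget.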

Author Information

Jingtao Li (Arizona State University)
Lingjuan Lyu (Sony AI)
Daisuke Iso (Sony AI)
Chaitali Chakrabarti (Arizona State University)
Michael Spranger (Sony)
