Poster
(FL)${}^2$: Overcoming Few Labels in Federated Semi-Supervised Learning
Seungjoo Lee · Thanh-Long V. Le · Jaemin Shin · Sung-Ju Lee
East Exhibit Hall A-C #3603
Abstract:
Federated Learning (FL) is a distributed machine learning framework that trains accurate global models while preserving clients' privacy-sensitive data. However, most FL approaches assume that clients possess labeled data, which is often not the case in practice. Federated Semi-Supervised Learning (FSSL) addresses this label deficiency problem, targeting situations where only the server holds a small amount of labeled data and clients have none. Nonetheless, a significant performance gap exists between centralized Semi-Supervised Learning (SSL) and FSSL. This gap arises from confirmation bias, which is more pronounced in FSSL due to the multiple local training epochs and the separation of labeled and unlabeled data. We propose (FL)${}^2$, a robust training method for unlabeled clients using sharpness-aware consistency regularization. We find that regularizing the original pseudo-labeling loss is suboptimal, and therefore carefully select unlabeled samples for regularization. We further introduce client-specific adaptive thresholding and learning status-aware aggregation to adjust the training process based on the learning progress of each client. Our experiments on two benchmark datasets demonstrate that our approach significantly improves performance and bridges the gap with SSL, particularly in scenarios with scarce labeled data.
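The abstract does not give the paper's exact formulations, but the two client-side ideas it names, client-specific adaptive thresholding and learning status-aware aggregation, can be sketched as follows. Everything here is an illustrative assumption: the function names, the use of mean max-softmax confidence as a learning-status proxy, and the specific threshold rule are not taken from the paper.

```python
import numpy as np

def adaptive_threshold(confidences, base_tau=0.95, floor=0.5):
    """Client-specific adaptive threshold (illustrative sketch).

    Scales a base confidence threshold by the client's estimated
    learning status -- here approximated by the mean max-softmax
    confidence over its unlabeled batch -- so that weakly trained
    clients retain more pseudo-labeled samples early on.
    """
    status = float(np.mean(confidences))  # crude learning-status proxy
    return max(floor, base_tau * status)

def select_pseudo_labeled(probs, base_tau=0.95):
    """Keep unlabeled samples whose max class probability exceeds the
    client's adaptive threshold; return their indices and hard labels."""
    conf = probs.max(axis=1)
    tau = adaptive_threshold(conf, base_tau)
    keep = np.where(conf >= tau)[0]
    return keep, probs[keep].argmax(axis=1)

def status_weighted_average(updates, statuses):
    """Learning status-aware aggregation (sketch): weight each client's
    model update by its normalized learning status before averaging."""
    w = np.asarray(statuses, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

For example, a client whose predictions average 0.8 confidence would face a threshold of roughly 0.76 rather than the fixed 0.95, admitting more of its unlabeled samples into training; the server would then weight that client's update in proportion to its estimated progress.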