

Poster

FedGMKD: An Efficient Prototype Federated Learning Framework through Knowledge Distillation and Discrepancy-Aware Aggregation

Jianqiao Zhang · Caifeng Shan · Jungong Han

East Exhibit Hall A-C #1311
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Federated Learning (FL) faces significant challenges due to data heterogeneity across distributed clients. To address this, we propose FedGMKD, a novel framework that combines knowledge distillation and discrepancy-aware aggregation for efficient prototype-based personalized FL, without requiring public datasets or server-side generative models. FedGMKD introduces Cluster Knowledge Fusion, which uses Gaussian Mixture Models to generate prototype features and soft predictions on the client side, enabling effective knowledge distillation while preserving data privacy. In addition, we implement a Discrepancy-Aware Aggregation Technique that weights client contributions by data quality and quantity, improving the global model's generalization across diverse client distributions. Theoretical analysis confirms the convergence of FedGMKD. Extensive experiments on benchmark datasets, including SVHN, CIFAR-10, and CIFAR-100, demonstrate that FedGMKD outperforms state-of-the-art methods, significantly improving both local and global accuracy in non-IID data settings.
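To make the two mechanisms in the abstract concrete, below is a minimal Python sketch of (a) client-side prototype extraction with per-class Gaussian Mixture Models and (b) a server-side aggregation that weights clients by sample count and by a simple discrepancy term. This is an illustrative reading of the abstract, not the authors' implementation: the function names (fit_client_prototypes, discrepancy_aware_aggregate), the single-component-per-class GMM, and the inverse-distance quality weight are all assumptions for exposition.

import numpy as np
from sklearn.mixture import GaussianMixture


def fit_client_prototypes(features, labels, n_components=1):
    """Client side (hypothetical): fit a GMM per class over feature
    vectors and use the component means as prototype features."""
    prototypes = {}
    for c in np.unique(labels):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag")
        gmm.fit(features[labels == c])
        # means_ has shape (n_components, feature_dim)
        prototypes[int(c)] = gmm.means_
    return prototypes


def discrepancy_aware_aggregate(client_protos, client_sizes, global_proto):
    """Server side (hypothetical): combine one class's prototypes from all
    clients. Each client's weight grows with its sample count (quantity)
    and shrinks with its distance to the current global prototype (a
    stand-in for the paper's data-quality term)."""
    weights, stacked = [], []
    for proto, n in zip(client_protos, client_sizes):
        discrepancy = np.linalg.norm(proto - global_proto)
        weights.append(n / (1.0 + discrepancy))
        stacked.append(proto)
    weights = np.asarray(weights) / np.sum(weights)
    # Weighted average over the client axis.
    return np.tensordot(weights, np.stack(stacked), axes=1)

In this reading, each client only ships prototype statistics (GMM means) rather than raw data, which is what lets the distillation step run without a public dataset; the server's weighting then damps the influence of clients whose local distribution sits far from the global one.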
