

Oral in Workshop: Federated Learning: Recent Advances and New Challenges

To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning

Yae Jee Cho · Divyansh Jhunjhunwala · Tian Li · Virginia Smith · Gauri Joshi


Abstract: Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model without directly sharing their local data. Although there is an abundance of research on improving the speed, efficiency, and accuracy of federated training, most works implicitly assume that all clients are willing to participate in the FL framework. Due to data heterogeneity, however, the global model may not work well for some clients, who may instead choose to use their own local models. Such disincentivization of clients is problematic from the server's perspective, because having more participating clients yields a better global model and offers stronger privacy guarantees to the participating clients. In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model by dynamically adjusting the aggregation weights assigned to their updates. Our experiments show that IncFL increases the number of incentivized clients by $30$-$55\%$ compared to standard federated training algorithms, and can also improve the generalization performance of the global model on unseen clients.
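The abstract does not specify how the aggregation weights are adjusted. Below is a minimal sketch of one plausible reading, not the authors' actual IncFL algorithm: clients for whom the current global model performs worse than their own local model (i.e., clients who are not yet incentivized) receive larger aggregation weights, nudging the next global model toward serving them. The function name, the boost heuristic, and the loss inputs are all illustrative assumptions.

```python
# Hypothetical incentive-aware aggregation sketch (not the authors' IncFL).
import numpy as np

def aggregate_updates(client_updates, global_losses, local_losses, boost=2.0):
    """Weighted average of flattened client updates.

    client_updates: list of np.ndarray, one flattened model update per client.
    global_losses:  per-client loss of the current global model.
    local_losses:   per-client loss of each client's own local model.
    boost:          extra weight for clients who are not yet incentivized
                    (global loss > local loss). Illustrative heuristic only.
    """
    weights = np.ones(len(client_updates))
    for i, (g_loss, l_loss) in enumerate(zip(global_losses, local_losses)):
        if g_loss > l_loss:  # global model is worse than this client's local model
            weights[i] *= boost
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: three clients; the second is not yet incentivized, so its update
# is weighted more heavily in the aggregate.
updates = [np.array([0.1, -0.2]), np.array([0.5, 0.4]), np.array([-0.1, 0.0])]
new_delta = aggregate_updates(updates,
                              global_losses=[0.3, 0.9, 0.2],
                              local_losses=[0.4, 0.5, 0.25])
print(new_delta)
```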
