Spotlight Poster

A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs

Yan Sun · Li Shen · Dacheng Tao

East Exhibit Hall A-C #3505
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

As a popular paradigm for balancing data privacy with collaborative training, federated learning (FL) has flourished as a way to process large-scale heterogeneous datasets distributed across edge clients. Owing to bandwidth limitations and security considerations, it splits the original problem into multiple subproblems that are solved in parallel, which gives primal-dual solutions great practical value in FL. In this paper, we review the recent development of classical federated primal-dual methods and point out a serious common defect they exhibit in practical scenarios, which we call "dual drift": under partial participation, the dual variables of clients that remain inactive for long stretches lag behind the evolving global consensus. To address this problem, we propose a novel Aligned Federated Primal-Dual (A-FedPD) method, which constructs virtual dual updates to align the global consensus with the local dual variables of clients that have gone unselected for prolonged periods. We further provide a comprehensive analysis of the optimization and generalization efficiency of A-FedPD on smooth non-convex objectives, which confirms its efficiency and practicality. Extensive experiments on several classical FL setups validate the effectiveness of the proposed method.
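To make the "dual drift" issue concrete, below is a minimal Python sketch of one federated primal-dual round in the FedPD/FedADMM style, augmented with a virtual dual update for inactive clients in the spirit the abstract describes. Everything here is an illustrative assumption rather than the authors' exact algorithm: the QuadClient toy loss, the a_fedpd_round helper, the hyperparameters, and in particular the virtual-update formula duals[cid] += rho * (w_global - w_next) are one plausible way to keep stale duals aligned with the consensus, not the paper's published update rule.

    import numpy as np

    class QuadClient:
        """Toy client with local loss f_i(w) = 0.5 * ||w - t_i||^2 (illustrative)."""
        def __init__(self, target):
            self.target = np.asarray(target, dtype=float)

        def grad(self, w):
            return w - self.target

    def a_fedpd_round(w_global, clients, duals, active_ids,
                      rho=1.0, local_steps=20, lr=0.1):
        """One sketched A-FedPD-style round (assumption, not the paper's exact method).

        Active clients minimize an augmented Lagrangian around the global model
        and take a standard dual-ascent step; inactive clients get a "virtual"
        dual update so their dual variables track the moving consensus instead
        of going stale (the dual drift the abstract points out).
        """
        new_primal = {}
        for cid in active_ids:
            w = w_global.copy()
            for _ in range(local_steps):
                # gradient of f_i(w) + dual^T (w - w_global) + (rho/2)||w - w_global||^2
                g = clients[cid].grad(w) + duals[cid] + rho * (w - w_global)
                w -= lr * g
            # standard dual ascent on the consensus constraint w_i = w_global
            duals[cid] = duals[cid] + rho * (w - w_global)
            new_primal[cid] = w

        # server aggregates the primal updates of the active clients
        w_next = np.mean([new_primal[cid] for cid in active_ids], axis=0)

        # virtual dual update for inactive clients: shift their duals by the
        # consensus movement, as if they had participated with w_i = w_global
        # (one illustrative realization of "virtual dual updates")
        for cid in duals:
            if cid not in active_ids:
                duals[cid] = duals[cid] + rho * (w_global - w_next)

        return w_next, duals

    # usage: 10 clients, 3 sampled per round (partial participation)
    rng = np.random.default_rng(0)
    clients = {i: QuadClient(rng.normal(size=3)) for i in range(10)}
    duals = {i: np.zeros(3) for i in range(10)}
    w = np.zeros(3)
    for _ in range(100):
        active = set(rng.choice(10, size=3, replace=False).tolist())
        w, duals = a_fedpd_round(w, clients, duals, active)

Without the final loop over inactive clients, the duals of rarely sampled clients stay frozen at old consensus points, which is exactly the hysteresis the abstract identifies; the virtual update keeps every dual variable moving with the global model between selections.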
