
Distributed Machine Learning with Sparse Heterogeneous Data
Dominic Richards · Sahand Negahban · Patrick Rebeschini

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Motivated by distributed machine learning settings such as Federated Learning, we consider the problem of fitting a statistical model across a distributed collection of heterogeneous data sets whose similarity structure is encoded by a graph topology. Precisely, we analyse the case where each node is associated with fitting a sparse linear model, and edges join two nodes if the difference of their solutions is also sparse. We propose a method based on Basis Pursuit Denoising with a total variation penalty, and provide finite sample guarantees for sub-Gaussian design matrices. Taking the root of the tree as a reference node, we show that if the sparsity of the differences across nodes is smaller than the sparsity at the root, then recovery is successful with fewer samples than by solving the problems independently, or by using methods that rely on a large overlap in the signal supports, such as the group Lasso. We consider both the noiseless and noisy settings, and numerically investigate the performance of distributed methods based on the Distributed Alternating Direction Method of Multipliers (ADMM) and hyperspectral unmixing.
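The estimator described in the abstract can be sketched as a convex program of the following form. This is an illustrative reconstruction, not the paper's exact formulation: the symbols (local signals \theta_v, design matrices X_v, observations y_v, edge set E, noise levels \varepsilon_v, and penalty weight \lambda) are our notation for exposition.

```latex
% Basis Pursuit Denoising with a total variation penalty over a graph G = (V, E):
% each node v fits a sparse linear model, and edges encode sparse differences.
\min_{\{\theta_v\}_{v \in V}} \;
    \sum_{v \in V} \|\theta_v\|_1
    \;+\; \lambda \sum_{(u, v) \in E} \|\theta_u - \theta_v\|_1
\quad \text{subject to} \quad
    \|y_v - X_v \theta_v\|_2 \le \varepsilon_v \;\; \text{for all } v \in V.
% Setting \varepsilon_v = 0 recovers the noiseless (exact recovery) setting.
```

The first term promotes sparsity of each local solution; the total variation term promotes sparse differences along edges, which is what allows the nodes to pool samples when their solutions are similar.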

Author Information

Dominic Richards (University of Oxford)
Sahand Negahban (Yale University)
Patrick Rebeschini (University of Oxford)
