

Oral in Workshop: Federated Learning: Recent Advances and New Challenges

Efficient Federated Random Subnetwork Training

Francesco Pase · Berivan Isik · Deniz Gunduz · Tsachy Weissman · Michele Zorzi


Abstract: One main challenge in federated learning is the large communication cost of exchanging weight updates from clients to the server at each round. While prior work has made great progress in compressing the weight updates through gradient compression methods, we propose a radically different approach that does not update the weights at all. Instead, our method freezes the weights at their initial random values and learns how to sparsify the random network for the best performance. To this end, the clients collaborate in training a \emph{stochastic} binary mask to find the optimal random sparse network within the original one. At the end of training, the final model is a randomly weighted sparse network -- a subnetwork inside the random dense network. We show improvements in accuracy, communication bitrate (less than $1$ bit per parameter (bpp)), convergence speed, and final model size (less than $1$ bpp) over relevant baselines on the MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets, in the low-bitrate regime under various system configurations.
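To illustrate the general idea of freezing random weights and learning only a stochastic binary mask, here is a minimal sketch of a masked linear layer in PyTorch. The class name, score parameterization, and the straight-through gradient estimator are assumptions for illustration only, not the authors' implementation; the paper's actual mask training and communication scheme may differ.

```python
import torch
import torch.nn as nn


class StochasticMaskedLinear(nn.Module):
    """Illustrative layer: random weights stay frozen; only a stochastic
    binary mask over those weights is learned (hypothetical sketch)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Weights are fixed at their random initialization and never updated.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.1, requires_grad=False
        )
        # Real-valued scores parameterize per-weight keep probabilities.
        self.scores = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(self.scores)   # keep probability per weight
        mask = torch.bernoulli(probs)        # sampled stochastic binary mask
        # Straight-through estimator (one common choice): gradients flow to
        # the scores even though the sampled mask is non-differentiable.
        mask = mask + probs - probs.detach()
        return nn.functional.linear(x, self.weight * mask)


# Usage sketch: in a federated setting, only the mask parameters (roughly
# one value per weight, quantizable toward ~1 bpp) would be exchanged.
layer = StochasticMaskedLinear(784, 10)
out = layer(torch.randn(32, 784))
out.sum().backward()
print(layer.scores.grad is not None, layer.weight.grad is None)  # True True
```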
