

Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Adaptive Sparse Federated Learning in Large Output Spaces via Hashing

Zhaozhuo Xu · Luyang Liu · Zheng Xu · Anshumali Shrivastava


Abstract: This paper focuses on the on-device training efficiency of federated learning (FL) and demonstrates that it is feasible to exploit sparsity on the client to save both computation and memory for deep neural networks with large output spaces. To this end, we propose a sparse FL scheme using a hash-based adaptive sampling algorithm. In this scheme, the server maintains neurons in hash tables. Each client looks up a subset of neurons from the server's hash tables and performs training on that subset. With locality-sensitive hash functions, this scheme can provide informative negative-class neurons with respect to the client's data. Moreover, the cheap hashing operations incur low computational overhead during sampling. In our empirical evaluation, we show that our approach can save up to $70\%$ of on-device computation and memory during FL while maintaining the same accuracy. Moreover, we demonstrate that the savings in the output layer can be used to increase model capacity and obtain better accuracy under a fixed hardware budget.
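The following is a minimal sketch of the general idea described in the abstract: the server indexes output-layer neurons in a locality-sensitive hash (LSH) table, and a client hashes its local example embeddings to retrieve a small subset of colliding neurons (likely hard negatives), which it unions with its own label neurons before training only that sparse slice of the output layer. It is not the authors' implementation; the `SimHashTable` class, the single-table setup, and all dimensions and names here are hypothetical choices made for illustration.

```python
import numpy as np

class SimHashTable:
    """Signed-random-projection (SimHash) LSH table over output-layer neuron
    weight vectors. Hypothetical illustration, not the paper's code."""
    def __init__(self, dim, num_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((num_bits, dim))  # random hyperplanes
        self.buckets = {}                                    # hash code -> neuron ids

    def _code(self, v):
        # Sign pattern of projections onto the hyperplanes = hash code.
        return tuple((self.planes @ v > 0).astype(np.int8))

    def insert(self, neuron_id, weight_vec):
        self.buckets.setdefault(self._code(weight_vec), set()).add(neuron_id)

    def query(self, embedding):
        # Neurons whose weights collide with this embedding (similar direction).
        return self.buckets.get(self._code(embedding), set())


# Server side: index every output neuron's weight vector for this round.
dim, num_classes = 64, 10_000
W = np.random.standard_normal((num_classes, dim)) * 0.01  # output-layer weights
table = SimHashTable(dim)
for j in range(num_classes):
    table.insert(j, W[j])

# Client side: hash local example embeddings to sample a small neuron subset,
# then union with the client's true labels so positives are always included.
client_embeddings = np.random.standard_normal((32, dim))
client_labels = {3, 17, 42}
active = set(client_labels)
for x in client_embeddings:
    active |= table.query(x)
print(f"training on {len(active)} of {num_classes} output neurons")
```

In practice one would use several independent tables and rebuild or update them as the output-layer weights change across rounds; the single-table, single-round version above is only meant to show why the lookup cost stays low (a few matrix-vector products and dictionary lookups) while still returning neurons relevant to the client's data.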
