Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with nearly 50 participants.
Author Information
Michael Diskin (Yandex, Higher School of Economics)
Alexey Bukhtiyarov (Moscow Institute of Physics and Technology)
Max Ryabinin (Yandex, Higher School of Economics)
Lucile Saulnier (Hugging Face)
Quentin Lhoest (Hugging Face)
Anton Sinitsin (Yandex)
Software developer at Yandex, DL researcher
Dmitry Popov (Higher School of Economics)
Dmitry V. Pyrkin (National Research University Higher School of Economics)
Maxim Kashirin (Higher School of Economics)
Alexander Borzunov (HSE University, Yandex)
Albert Villanova del Moral (CNRS)
Denis Mazur (Yandex)
Ilia Kobelev (Moscow Institute of Physics and Technology)
Yacine Jernite (Hugging Face)
Thomas Wolf (Hugging Face)
Gennady Pekhimenko (University of Toronto)
More from the Same Authors
- 2021 Poster: Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices »
  Max Ryabinin · Eduard Gorbunov · Vsevolod Plokhotnyuk · Gennady Pekhimenko
- 2021 Poster: Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets »
  Max Ryabinin · Andrey Malinin · Mark Gales
- 2021: Training Transformers Together »
  Alexander Borzunov · Max Ryabinin · Tim Dettmers · Quentin Lhoest · Lucile Saulnier · Michael Diskin · Yacine Jernite · Thomas Wolf
- 2020 Poster: Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts »
  Max Ryabinin · Anton Gusev
- 2020 Poster: Movement Pruning: Adaptive Sparsity by Fine-Tuning »
  Victor Sanh · Thomas Wolf · Alexander Rush
- 2020: An introduction to transfer learning in NLP and HuggingFace »
  Thomas Wolf
- 2019 Poster: Beyond Vector Spaces: Compact Data Representation as Differentiable Weighted Graphs »
  Denis Mazur · Vage Egiazarian · Stanislav Morozov · Artem Babenko