Stop Wasting My Time! Saving Days of ImageNet and BERT Training with Latest Weight Averaging
Jean Kaddour
Event URL: https://openreview.net/forum?id=0OrABUHZuz
Training vision or language models on large datasets can take days, if not weeks. We show that averaging the weights of the k latest checkpoints, each collected at the end of an epoch, can speed up training progress in terms of loss and accuracy by dozens of epochs, corresponding to time savings of up to ~68 GPU hours when training a ResNet-50 on ImageNet and up to ~30 GPU hours when training a RoBERTa-Base model on WikiText-103.
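The core idea in the abstract, keeping a rolling buffer of the k latest end-of-epoch checkpoints and uniformly averaging their weights, can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the checkpoint representation (a dict of parameter names to lists of floats) and the choice of k are assumptions made for the example.

```python
from collections import deque

def make_lawa_buffer(k):
    """Ring buffer that keeps only the k latest checkpoints (illustrative)."""
    return deque(maxlen=k)

def average_checkpoints(buffer):
    """Uniformly average parameter dicts mapping name -> list of floats."""
    n = len(buffer)
    avg = {}
    for state in buffer:
        for name, values in state.items():
            acc = avg.setdefault(name, [0.0] * len(values))
            for i, v in enumerate(values):
                acc[i] += v / n
    return avg

# Toy usage: three "epochs" of a single two-parameter layer, k = 2,
# so only the two most recent checkpoints are averaged.
buf = make_lawa_buffer(k=2)
for weights in [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]:
    buf.append({"layer.weight": weights})
avg = average_checkpoints(buf)
print(avg["layer.weight"])  # [4.0, 5.0]
```

In a real training loop the averaged weights would be loaded into a copy of the model for evaluation, while the optimizer keeps updating the unaveraged weights.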
Author Information
Jean Kaddour (University College London)
More from the Same Authors
- 2022: Evaluating the Impact of Geometric and Statistical Skews on Out-Of-Distribution Generalization Performance
  Aengus Lynch · Jean Kaddour · Ricardo Silva
- 2022 Poster: When Do Flat Minima Optimizers Work?
  Jean Kaddour · Linqing Liu · Ricardo Silva · Matt Kusner
- 2021 Poster: Causal Effect Inference for Structured Treatments
  Jean Kaddour · Yuchen Zhu · Qi Liu · Matt Kusner · Ricardo Silva
- 2020 Poster: Probabilistic Active Meta-Learning
  Jean Kaddour · Steindor Saemundsson · Marc Deisenroth