

Poster

DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation

Sunghyeon Woo · Baeseong Park · Byeongwook Kim · Minjung Jo · Se Jung Kwon · Dongsuk Jeon · Dongsoo Lee

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, by using DropBP when fine-tuning LLaMA2-70B through QLoRA, we can reduce training time by 44%, accelerate convergence to the same perplexity by 1.5×, and enable training with a sequence length 6.2× larger on a single NVIDIA A100 80GiB GPU. The code is available at https://anonymous.4open.science/r/dropbp-neurips2024.
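
To make the mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' released implementation (see the linked repository for that). It assumes a residual sub-layer of the form y = x + f(x) and skips backward propagation through f with some probability by running its forward pass without autograd tracking, so gradients flow only through the residual connection and the layer's activations are not stored. The class name `DropBPBlock`, the `drop_rate` argument, and the toy wrapped block are illustrative assumptions; assigning each layer's drop rate from its measured sensitivity, as the paper describes, is not shown here.

```python
# Minimal sketch of backward-propagation dropping for a residual sub-layer.
# Assumption: this is NOT the official DropBP code; names and structure are illustrative.
import torch
import torch.nn as nn


class DropBPBlock(nn.Module):
    """Wraps a residual sub-layer so its backward pass can be randomly skipped.

    The forward output is unchanged. When the layer is "dropped", its forward
    pass runs under torch.no_grad(), so no activations are saved and no
    gradients are computed for it; gradients reach earlier layers only through
    the residual connection.
    """

    def __init__(self, layer: nn.Module, drop_rate: float = 0.0):
        super().__init__()
        self.layer = layer
        # In the paper, per-layer drop rates are set from layer sensitivity;
        # here it is simply a fixed constructor argument.
        self.drop_rate = drop_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and torch.rand(()).item() < self.drop_rate:
            with torch.no_grad():  # skip activation storage and backward through this layer
                out = self.layer(x)
        else:
            out = self.layer(x)
        return x + out  # residual path always remains differentiable


# Illustrative usage with a toy feed-forward block standing in for a transformer layer.
block = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
layer = DropBPBlock(block, drop_rate=0.5)

x = torch.randn(4, 512, requires_grad=True)
y = layer(x)
y.sum().backward()  # when dropped, block parameters receive no gradient this step
```

Under this sketch, forward computation is always performed in full; only the backward work and the activation memory of dropped layers are saved, which matches the abstract's framing of training shallow submodules formed by the undropped layers and residual connections.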
