

Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Imbalance-Regularized LoRA: A Plug-and-Play Method for Improving Fine-Tuning of Foundation Models

Zhenyu Zhu · Yongtao Wu · Quanquan Gu · Volkan Cevher


Abstract: Low-Rank Adaptation (LoRA) is an effective fine-tuning algorithm for large models, enabling efficient adaptation with fewer trainable parameters. Despite its success, there remains significant room to improve LoRA's performance. In this paper, we introduce iLoRA (Imbalance-Regularized LoRA), which enhances LoRA by incorporating a regularization term that captures the imbalance in forward propagation. This regularization maintains an imbalance between the matrices A and B, keeping the activation variance stable and independent of the layer dimension. Specifically, we first analyze the forward dynamics, observe that this imbalance arises in stable training, and introduce the imbalance regularization accordingly. Furthermore, by combining it with preconditioning techniques [Zhang and Pilanci, 2024], we propose π-LoRA (Preconditioned iLoRA), which improves the backpropagation process. Our method is plug-and-play: it requires only minor modifications to existing code and incurs negligible additional computational overhead. Finally, experiments on large language models and text-to-image models demonstrate that iLoRA and π-LoRA significantly outperform existing LoRA and preconditioned LoRA methods.
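
Below is a minimal sketch of how such a plug-and-play regularizer could be attached to a standard LoRA layer in PyTorch. It is not the authors' implementation: the `LoRALinear` module, the `imbalance_penalty` helper, and the ratio-based penalty form are illustrative assumptions, since the abstract does not specify the exact form of the regularization term.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A standard LoRA adapter around a frozen linear layer (illustrative)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)  # A: rank x d_in
        self.B = nn.Parameter(torch.zeros(d_out, rank))               # B: d_out x rank
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


def imbalance_penalty(layer: LoRALinear, target_ratio: float = 4.0) -> torch.Tensor:
    """Hypothetical imbalance regularizer: keeps ||B||_F / ||A||_F near a
    prescribed (imbalanced) target. The actual iLoRA term is defined in the
    paper and may take a different form."""
    ratio = torch.norm(layer.B) / (torch.norm(layer.A) + 1e-8)
    return (ratio - target_ratio) ** 2


# Plug-and-play usage: add the penalty to the usual task loss.
# lora_modules = [m for m in model.modules() if isinstance(m, LoRALinear)]
# loss = task_loss + lam * sum(imbalance_penalty(m) for m in lora_modules)
```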
