Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
Vijay Chandra Lingam · Atula Neerkaje · Aditya Vavre · Aneesh Shetty · Gautham Krishna Gudur · Joydeep Ghosh · Alex Dimakis · Eunsol Choi · Aleksandar Bojchevski · Sujay Sanghavi
Abstract:
Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights $W$ and inject learnable matrices $\Delta W$. These matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, which enables a trade-off between the number of trainable parameters and model expressivity by allowing a flexible number of off-diagonal interactions between the singular vectors of $W$ when constructing $\Delta W$, distinguishing it from previous SVD-based methods. This approach provides fine-grained control over expressivity through the number of trainable coefficients. Extensive experiments on language and vision benchmarks demonstrate that SVFT recovers up to 96\% of full fine-tuning performance while training only 0.006 to 0.25\% of parameters, outperforming existing methods that recover only up to 85\% performance using 0.03 to 0.8\% of the trainable parameter budget.
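The parameterization described in the abstract can be sketched concretely: take the SVD of the frozen weight, then train only a small coefficient matrix that mixes its singular vectors, with the diagonal coefficients always trainable and a configurable number of off-diagonal coefficients controlling expressivity. The PyTorch sketch below is an illustration under stated assumptions, not the authors' implementation; the class name `SVFTLinear`, the `num_offdiag` knob, and the random off-diagonal sparsity pattern are hypothetical choices (the paper's pattern may be structured differently).

```python
import torch
import torch.nn as nn


class SVFTLinear(nn.Module):
    """Sketch of an SVFT-style layer: the pre-trained weight W is frozen and the
    update delta_W = U @ M @ V^T is learned, where U, V are W's singular vectors
    and M is a sparse coefficient matrix (diagonal + a few off-diagonal entries)."""

    def __init__(self, weight: torch.Tensor, num_offdiag: int = 0):
        super().__init__()
        # Singular vectors of the frozen pre-trained weight (never trained).
        U, S, Vh = torch.linalg.svd(weight.detach(), full_matrices=False)
        self.register_buffer("weight", weight.detach().clone())
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)
        r = S.shape[0]
        # Diagonal coefficients: one scale per (u_i, v_i) pair; zero init keeps
        # the layer identical to the frozen model at the start of fine-tuning.
        self.diag = nn.Parameter(torch.zeros(r))
        # Off-diagonal coefficients couple different singular vectors; their
        # count is the expressivity/parameter trade-off. Random pattern here
        # is an assumption made for illustration only.
        idx = torch.randperm(r * r)[:num_offdiag]
        self.register_buffer("off_rows", idx // r)
        self.register_buffer("off_cols", idx % r)
        self.off_vals = nn.Parameter(torch.zeros(num_offdiag))

    def delta_w(self) -> torch.Tensor:
        # Assemble the coefficient matrix M and form delta_W = U M V^T,
        # i.e. a sparse combination of outer products u_i v_j^T.
        M = torch.diag(self.diag)
        M = M.index_put((self.off_rows, self.off_cols), self.off_vals, accumulate=True)
        return self.U @ M @ self.Vh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight + self.delta_w()).T


# Example: wrap a frozen 16x32 weight and train only r + num_offdiag coefficients.
layer = SVFTLinear(torch.randn(16, 32), num_offdiag=8)
out = layer(torch.randn(4, 32))  # shape (4, 16)
```

With `num_offdiag=0` the sketch reduces to learning one scale per singular direction; increasing it spends the extra parameter budget on cross-interactions between singular vectors, which is the trainable-parameters-versus-expressivity trade-off the abstract describes.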