

Poster

SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors

Vijay Chandra Lingam · Atula Neerkaje · Aditya Vavre · Aneesh Shetty · Gautham Krishna Gudur · Joydeep Ghosh · Eunsol Choi · Alex Dimakis · Aleksandar Bojchevski · Sujay Sanghavi

East Exhibit Hall A-C #2207
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights $\mathbf{W}$ and inject learnable matrices $\Delta \mathbf{W}$. These $\Delta \mathbf{W}$ matrices are structured for efficient parameterization, often using techniques such as low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on $\Delta \mathbf{W}$ depends on the specific weight matrix $\mathbf{W}$. Specifically, SVFT updates $\mathbf{W}$ as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of parameters, outperforming existing methods that recover only up to 85% of performance using 0.03% to 0.8% of the trainable parameter budget.
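For intuition, below is a minimal, hypothetical PyTorch-style sketch of the kind of update the abstract describes: the frozen weight is factored once as $\mathbf{W} = \mathbf{U}\,\mathrm{diag}(\mathbf{s})\,\mathbf{V}^\top$, and the learned update is $\Delta \mathbf{W} = \mathbf{U} \mathbf{M} \mathbf{V}^\top$ with a sparse trainable coefficient matrix $\mathbf{M}$. The class name SVFTLinear, the diagonal-plus-random-off-diagonal sparsity pattern, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SVFTLinear(nn.Module):
    """Illustrative SVFT-style layer (a sketch, not the authors' code).

    The frozen weight W is factored once as W = U diag(s) V^T. The update is
    dW = U M V^T, where M is a sparse coefficient matrix and only its nonzero
    entries are trained; U, V, and W itself stay frozen.
    """

    def __init__(self, weight: torch.Tensor, num_offdiag: int = 16):
        super().__init__()
        weight = weight.detach()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)  # one-time SVD
        self.register_buffer("weight", weight)  # frozen pre-trained W
        self.register_buffer("U", U)            # frozen left singular vectors
        self.register_buffer("Vh", Vh)          # frozen right singular vectors (transposed)
        r = S.shape[0]
        # Trainable coefficients on the diagonal of M (one scale per singular-vector pair) ...
        self.diag_coeff = nn.Parameter(torch.zeros(r))
        # ... plus a small set of off-diagonal coefficients at randomly chosen positions
        # (the pattern here is illustrative). Their count trades parameters for expressivity.
        self.register_buffer("offdiag_idx", torch.randint(0, r, (2, num_offdiag)))
        self.offdiag_coeff = nn.Parameter(torch.zeros(num_offdiag))

    def delta_w(self) -> torch.Tensor:
        # Assemble the sparse coefficient matrix M, then form dW = U M V^T,
        # i.e. a sparse combination of outer products u_i v_j^T weighted by M_ij.
        M = torch.diag(self.diag_coeff)
        M[self.offdiag_idx[0], self.offdiag_idx[1]] = self.offdiag_coeff
        return self.U @ M @ self.Vh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight + self.delta_w()).T


# Minimal usage: wrap a frozen 64x32 weight; only r + num_offdiag scalars are trainable.
layer = SVFTLinear(torch.randn(64, 32), num_offdiag=8)
out = layer(torch.randn(4, 32))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([4, 64]) 40
```

In this sketch, only diag_coeff and offdiag_coeff receive gradients, so the per-matrix trainable parameter count is just the number of retained coefficients, which is what gives the fine-grained control over expressivity mentioned in the abstract.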
