

Oral Poster

HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning

Chunlin Tian · Zhan Shi · Zhijiang Guo · Li Li · Cheng-Zhong Xu

East Exhibit Hall A-C #2407
Thu 12 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 3A: Generative Models
Thu 12 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This gap becomes even more pronounced in complex domains, highlighting the need for improved PEFT approaches that can achieve better performance. Through a series of experiments, we have uncovered two critical insights that shed light on the training and parameter inefficiency of LoRA. Building on these insights, we have developed HydraLoRA, a LoRA framework with an asymmetric structure that eliminates the need for domain expertise. Our experiments demonstrate that HydraLoRA outperforms other PEFT approaches, even those that rely on domain knowledge during the training and inference phases. Code is available at https://github.com/Clin0212/HydraLoRA.
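
For readers unfamiliar with the design, below is a minimal PyTorch sketch of the asymmetric structure the abstract refers to, assuming it denotes a single shared low-rank down-projection A paired with multiple up-projection B heads that are combined by a learned router. The class name AsymmetricLoRALayer, the hyperparameters, and the exact routing scheme are illustrative assumptions, not the authors' reference implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn


class AsymmetricLoRALayer(nn.Module):
    """Sketch of an asymmetric LoRA adapter: one shared A matrix and
    several B heads, mixed per token by a learned router (illustrative)."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, num_heads: int = 3):
        super().__init__()
        # Shared A matrix (assumed to capture task-agnostic structure).
        self.lora_A = nn.Linear(d_in, rank, bias=False)
        # Multiple B heads (assumed to capture task-specific variation).
        self.lora_B = nn.ModuleList(
            [nn.Linear(rank, d_out, bias=False) for _ in range(num_heads)]
        )
        # Router produces soft weights over the B heads for each token.
        self.router = nn.Linear(d_in, num_heads, bias=False)
        # Standard LoRA init: A random, B zero, so the adapter starts as a no-op.
        nn.init.kaiming_uniform_(self.lora_A.weight, a=5 ** 0.5)
        for head in self.lora_B:
            nn.init.zeros_(head.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); returns the low-rank update to add to the
        # frozen base layer's output.
        gate = torch.softmax(self.router(x), dim=-1)        # (B, S, H)
        shared = self.lora_A(x)                             # (B, S, rank)
        heads = torch.stack(
            [b(shared) for b in self.lora_B], dim=-1
        )                                                   # (B, S, d_out, H)
        return (heads * gate.unsqueeze(-2)).sum(dim=-1)     # (B, S, d_out)


# Usage: add the adapter output to the output of a frozen base linear layer.
if __name__ == "__main__":
    base = nn.Linear(768, 768)
    base.requires_grad_(False)
    adapter = AsymmetricLoRALayer(768, 768)
    x = torch.randn(2, 16, 768)
    y = base(x) + adapter(x)
    print(y.shape)  # torch.Size([2, 16, 768])
```

Only the shared A, the B heads, and the router are trainable here, which keeps the parameter count close to a single LoRA of the same rank while letting different heads specialize.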
