

In-person presentation in Competition: NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day

Invited Speaker: Sebastian Raschka (lightning.ai) - LoRA in Action: Insights from Finetuning LLMs with Low-Rank Adaptation

Sebastian Raschka

Fri 15 Dec, 12:00–12:15 p.m. PST

Abstract:

Low-rank adaptation (LoRA) stands as one of the most popular and effective methods for efficiently training custom Large Language Models (LLMs). As practitioners of open-source LLMs, we regard LoRA as a crucial technique in our toolkit. In this talk, I will delve into practical insights gained from running hundreds of experiments with LoRA, addressing questions such as: How much memory can I save with quantized LoRA? Are Adam optimizers memory-intensive? Should we train for multiple epochs? How do we choose the LoRA rank? Moreover, the talk will include ideas for future experiments and talking points to stimulate discussion in the workshop, such as mechanisms to avoid overfitting in LoRA and strategies for combining LoRA weights from multiple experiments.
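For readers unfamiliar with the method discussed in the abstract, the sketch below shows a minimal LoRA layer in PyTorch. It is an illustrative assumption for context only, not material from the talk: it wraps a frozen pretrained nn.Linear with a trainable low-rank update, where the rank r (and scaling alpha) is the hyperparameter the abstract's rank-selection question refers to.

```python
# Minimal LoRA sketch (illustrative assumption, not from the presentation):
# y = W x + (alpha / r) * B A x, with W frozen and only A, B trained.
import math
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)  # freeze pretrained weight
        if self.linear.bias is not None:
            self.linear.bias.requires_grad_(False)
        # A: (r, in_features), B: (out_features, r); B starts at zero so the
        # wrapped layer initially behaves exactly like the pretrained one.
        self.lora_A = nn.Parameter(torch.empty(r, linear.in_features))
        self.lora_B = nn.Parameter(torch.zeros(linear.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank correction
        return self.linear(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: only the two low-rank matrices are trainable.
base = nn.Linear(768, 768)
layer = LoRALinear(base, r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 2 * 768 * 8 = 12,288 vs. ~590k full
```

With this parameterization, raising the rank r increases the number of trainable parameters (and the capacity of the update) linearly, which is the trade-off behind the rank-selection question posed in the abstract.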
