

Zoom presentation in Competition: NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

Invited Speaker: Sourab Mangrulkar -- Generative AI for All: 🤗 PEFT: Finetuning made simple, efficient and extendable

SOURAB MANGRULKAR

Fri 15 Dec 12:45 p.m. PST — 1 p.m. PST

Abstract:

Generative AI is becoming part and parcel of everyone's daily life. Large Language Models such as ChatGPT/GPT-4, PaLM, Claude, Llama, Mistral, Falcon and StarCoder are at the core of this trend owing to their state-of-the-art performance on a wide range of Natural Language Processing (NLP) tasks, their conversational ability, and their logical reasoning and coding skills. The conventional paradigm is to pretrain a model on web-scale data and then finetune it on downstream tasks for the best performance. As models grow larger, however, this finetuning step becomes infeasible without access to dedicated hardware, which prevents the widespread availability and use of these models. Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models to downstream applications without fine-tuning all of the model's parameters, while maintaining performance. 🤗 PEFT is an open-source project whose vision is to democratize fine-tuning of large AI models on consumer and low-resource hardware while remaining simple, efficient and adaptable at scale. In this talk, I will present the development and design considerations that went into building 🤗 PEFT and how it fits into the Generative AI landscape.
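To make the parameter-efficient idea concrete, here is a minimal sketch of a typical LoRA workflow with the 🤗 PEFT library: a pretrained model is wrapped with a small set of trainable adapter weights while the base weights stay frozen. This is an illustrative example rather than material from the talk; the checkpoint (facebook/opt-350m) and the hyperparameter values are assumptions chosen for demonstration.

```python
# Minimal sketch of LoRA fine-tuning with 🤗 PEFT (illustrative; checkpoint and
# hyperparameters are assumptions, not taken from the talk).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a pretrained base model (small open checkpoint used here as an example).
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure LoRA: low-rank adapters are injected into selected linear layers.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the LoRA update
    lora_dropout=0.05,                     # dropout on the adapter path
    target_modules=["q_proj", "v_proj"],   # layers to adapt (names are model-dependent)
    task_type="CAUSAL_LM",
)

# Wrap the base model: only the adapter weights are trainable, the rest stay frozen.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```

With only the adapter parameters trainable (typically well under 1% of the total), the wrapped model can be passed to a standard training loop and fine-tuned on a single consumer GPU, which is the kind of low-resource setting the abstract describes.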
