NeurIPS 2023


Workshop

Instruction Tuning and Instruction Following

Qinyuan Ye · Yizhong Wang · Shayne Longpre · Yao Fu · Daniel Khashabi

Room 220 - 222
Fri 15 Dec, 6:30 a.m. PST

Recent advancements in training large language models (LLMs) to follow “instructions” have significantly increased their ability to comprehend open-ended language commands, encompassing a wide range of needs, preferences, and values.

This transformation has led to the creation of powerful industrial models such as GPT-4 and Bard, as well as an increased focus within the open-source and research communities: creating new benchmarks and resources, developing new training methods, and understanding the limitations of these methods. Furthermore, instruction following powered by LLMs has proven effective in multi-modal settings, with applications in image editing and robotic command execution.

We organize this workshop to facilitate discussions on advancing instruction tuning methodologies and constructing general-purpose instruction-following models. Such a workshop is especially needed given the prevalence of proprietary models with restricted access, which creates the need for an open platform to encourage discussion. Moreover, we aim to foster interdisciplinary collaboration by bringing together researchers from diverse fields such as natural language processing, computer vision, robotics, human-computer interaction, and AI safety to share their latest findings and explore potential avenues for future research.


Schedule