

Talk in Workshop: Instruction Tuning and Instruction Following

Invited Talk 4 - Sara Hooker

Fri 15 Dec, noon–12:30 p.m. PST

Abstract:

Title: Challenges and Open Opportunities in Instruction Tuning: The Case Study of AYA

Abstract: In this talk, to frame many of the open challenges and opportunities in instruction tuning, I'll share some of the lessons learned and open questions spurred by AYA. AYA is a year-long open-science endeavor aimed at building a multilingual language model via instruction tuning, harnessing the collective wisdom and contributions of independent researchers across the world. It aims to improve the coverage of instruction-finetuned datasets for 101 languages. The project was initiated by Cohere For AI as a multi-institutional collaboration with researchers, engineers, linguists, social scientists, and lifelong learners from over 100 countries around the world. I'll use this setting as a springboard to discuss a wider set of research directions in instruction-finetuning optimization approaches.

Bio: Sara Hooker leads Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. Cohere For AI supports fundamental research that explores the unknown and is focused on creating more points of entry into machine learning research. With a long track record of impactful research at Google Brain, Sara brings a wealth of knowledge from across machine learning. Her work has focused on model-efficiency training techniques and on optimizing for models that fulfill multiple desired criteria: interpretable, efficient, fair, and robust. Before Cohere For AI, she founded Delta Analytics, a non-profit that brings together researchers, data scientists, and software engineers to volunteer their skills for non-profits around the world.
