Poster

Can LLMs Learn by Teaching? A Preliminary Study

Xuefei Ning · Zifu Wang · Shiyao Li · Zinan Lin · Peiran Yao · Tianyu Fu · Matthew Blaschko · Guohao Dai · Huazhong Yang · Yu Wang

West Ballroom A-D #6605
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Teaching to improve student models (e.g., knowledge distillation) is an extensively studied methodology for LLMs. However, for humans, teaching not only improves students but also improves teachers. We ask: Can LLMs also learn by teaching (LbT)? If so, we can potentially unlock the possibility of continuously advancing the models without solely relying on human-produced data or stronger models. In this paper, we provide a preliminary exploration of this ambitious agenda. We show that LbT ideas can be easily incorporated into existing LLM training/prompting pipelines and provide noticeable improvements. Specifically, we design three methods, each mimicking one of the three levels of LbT in humans: observing students’ feedback, learning from the feedback, and learning iteratively. The goals are to improve answer accuracy without training and to improve the models’ inherent capability with fine-tuning. The findings are rather encouraging. For example, similar to LbT in humans, we see that: (1) LbT can induce weak-to-strong generalization: strong models can improve themselves by teaching weaker models; (2) diversity in students is important: teaching multiple students can be better than teaching one student or the teacher itself. We hope that this early promise can inspire future research on LbT and, more broadly, on adopting advanced techniques from education to improve LLMs.
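To make the first level, observing students’ feedback, concrete, below is a minimal Python sketch of how student performance could be used to score and select among a teacher’s candidate answers. Everything here is an assumption for illustration: the `teacher`/`student` objects, their `generate`/`solve` methods, and the `exam_problems` with reference answers are hypothetical placeholders, not the paper’s actual implementation.

```python
def lbt_select_answer(teacher, students, problem, exam_problems, n_samples=8):
    """Pick the teacher answer whose rationale teaches students best.

    Sketch only: `teacher.generate(problem)` is assumed to return one
    (rationale, answer) pair, and `student.solve(question, example=...)`
    is assumed to answer a question given one in-context teaching example.
    """
    scored = []
    for _ in range(n_samples):
        # One teaching attempt: a rationale plus the answer it supports.
        rationale, answer = teacher.generate(problem)

        # "Observe students' feedback": grade each student on exam problems
        # after showing this rationale/answer as an in-context example.
        correct, total = 0, 0
        for student in students:  # multiple, diverse students (finding 2)
            for exam in exam_problems:
                pred = student.solve(
                    exam.question,
                    example=(problem, rationale, answer),
                )
                correct += int(pred == exam.reference_answer)
                total += 1

        scored.append((correct / total, answer))

    # Return the answer backed by the most "teachable" rationale.
    return max(scored, key=lambda t: t[0])[1]
```

In this sketch the student exam score acts as a training-free reranking signal for the teacher’s own outputs; the same score could, in principle, also serve as a reward for fine-tuning the teacher, which corresponds to the learning-from-feedback level.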
