Poster
Learning to summarize with human feedback
Nisan Stiennon · Long Ouyang · Jeffrey Wu · Daniel Ziegler · Ryan Lowe · Chelsea Voss · Alec Radford · Dario Amodei · Paul Christiano

Tue Dec 08 09:00 PM -- 11:00 PM (PST) @ Poster Session 2 #722

As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these are rough proxies for what we really care about: summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in summaries that humans judge to be better than those produced by optimizing ROUGE. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
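
The pipeline in the abstract reduces to two training signals: a pairwise comparison loss for the reward model (the human-preferred summary should score higher than the rejected one), and a learned reward, regularized by a KL-style penalty toward the supervised baseline, used for RL fine-tuning. Below is a minimal PyTorch sketch of those two signals, assuming scalar reward-model outputs and per-summary log-probabilities; the function names, shapes, and the beta value are illustrative, not the authors' code.

```python
# Minimal sketch of the two training signals described in the abstract.
# All names, shapes, and constants here are illustrative assumptions.
import torch
import torch.nn.functional as F

def comparison_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Reward-model objective on human comparisons: maximize the log-probability
    # that the preferred summary wins, i.e. -log sigmoid(r_pref - r_rej).
    return -F.logsigmoid(r_preferred - r_rejected).mean()

def rl_reward(r_theta: torch.Tensor,
              logp_policy: torch.Tensor,
              logp_sft: torch.Tensor,
              beta: float = 0.05) -> torch.Tensor:
    # Reward used during RL fine-tuning: the learned reward minus a penalty
    # on log pi_RL(y|x) - log pi_SFT(y|x), which keeps the fine-tuned policy
    # close to the supervised baseline. The beta value is illustrative.
    return r_theta - beta * (logp_policy - logp_sft)

# Toy batch of three comparisons with hypothetical scalar rewards.
print(comparison_loss(torch.tensor([1.2, 0.3, 0.8]),
                      torch.tensor([0.4, 0.5, -0.1])))
# Toy RL reward for a single sampled summary.
print(rl_reward(torch.tensor([1.0]), torch.tensor([-2.0]), torch.tensor([-2.5])))
```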

Author Information

Nisan Stiennon (OpenAI)
Long Ouyang (OpenAI)
Jeffrey Wu (OpenAI)
Daniel Ziegler (OpenAI)

I work at OpenAI on AI alignment: how can we build techniques for learning human values that scale robustly to superhuman learning systems and task performance?

Ryan Lowe (OpenAI)
Chelsea Voss (OpenAI)
Alec Radford (OpenAI)
Dario Amodei (OpenAI)
Paul Christiano (OpenAI)

More from the Same Authors

  • 2020 Workshop: Cooperative AI »
    Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach
  • 2020 Poster: Language Models are Few-Shot Learners »
    Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared D Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei
  • 2020 Oral: Language Models are Few-Shot Learners »
    Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared D Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei
  • 2019 Workshop: Retrospectives: A Venue for Self-Reflection in ML Research »
    Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier
  • 2018 Workshop: Emergent Communication Workshop »
    Jakob Foerster · Angeliki Lazaridou · Ryan Lowe · Igor Mordatch · Douwe Kiela · Kyunghyun Cho
  • 2018 Poster: Reward learning from human preferences and demonstrations in Atari »
    Borja Ibarz · Jan Leike · Tobias Pohlen · Geoffrey Irving · Shane Legg · Dario Amodei
  • 2017 Poster: Deep Reinforcement Learning from Human Preferences »
    Paul Christiano · Jan Leike · Tom Brown · Miljan Martic · Shane Legg · Dario Amodei
  • 2017 Poster: Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments »
    Ryan Lowe · Yi Wu · Aviv Tamar · Jean Harb · Pieter Abbeel · Igor Mordatch
  • 2016 Workshop: Adversarial Training »
    David Lopez-Paz · Leon Bottou · Alec Radford
  • 2016 Poster: Improved Techniques for Training GANs »
    Tim Salimans · Ian Goodfellow · Wojciech Zaremba · Vicki Cheung · Alec Radford · Peter Chen · Xi Chen