The use of language-model-based question-answering systems to aid humans in completing difficult tasks is limited, in part, by the unreliability of the text these systems generate. Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If these arguments are helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce such arguments where needed. Previous research has shown that a single turn of arguments in this format is not helpful to humans. However, because debate settings are characterized by back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that humans perform similarly on our task regardless of whether they have access to the arguments. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.
Author Information
Alicia Parrish (Google)
Harsh Trivedi (Stony Brook University)
Nikita Nangia (New York University)
Jason Phang (New York University)
Vishakh Padmakumar (New York University)
Amanpreet Singh Saimbhi (New York University)
Samuel Bowman (New York University & Anthropic)
More from the Same Authors
- 2022 : Sam Bowman: What's the deal with AI safety?
  Samuel Bowman
- 2022 : EleutherAI: Going Beyond "Open Science" to "Science in the Open"
  Jason Phang · Herbie Bradley · Leo Gao · Louis Castricato · Stella Biderman
- 2022 Workshop: Human Evaluation of Generative Models
  Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith
- 2021 : Invited talk 9
  Samuel Bowman
- 2021 Panel: The Role of Benchmarks in the Scientific Progress of Machine Learning
  Lora Aroyo · Samuel Bowman · Isabelle Guyon · Joaquin Vanschoren
- 2019 Poster: Can Unconditional Language Models Recover Arbitrary Sentences?
  Nishant Subramani · Samuel Bowman · Kyunghyun Cho
- 2019 Poster: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
  Alex Wang · Yada Pruksachatkun · Nikita Nangia · Amanpreet Singh · Julian Michael · Felix Hill · Omer Levy · Samuel Bowman
- 2019 Spotlight: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
  Alex Wang · Yada Pruksachatkun · Nikita Nangia · Amanpreet Singh · Julian Michael · Felix Hill · Omer Levy · Samuel Bowman