Invited Talk
in
Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision

Invited Talk #2 - Disentangling Faithfulness and Extractiveness in Abstractive Summarization (He He)


Abstract:

Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed various methods that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness in the model outputs (i.e., copying more words from the document) or from a genuinely better understanding of the document. In this talk, I will discuss the faithfulness-abstractiveness trade-off in summarization and a better method for evaluating faithfulness that accounts for the change in extractiveness. I will then show that it is possible to mitigate the faithfulness-abstractiveness trade-off by controlling the level of extractiveness during generation.
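The extractiveness the abstract refers to is commonly quantified with extractive fragment coverage and density (Grusky et al., 2018): greedily match the longest spans of the summary that appear verbatim in the source document. A minimal sketch of that metric (whitespace tokenization and function names are illustrative assumptions, not the speaker's implementation):

```python
def extractive_fragments(article: str, summary: str) -> list[list[str]]:
    """Greedily find the longest summary spans copied verbatim from the article."""
    a, s = article.split(), summary.split()
    fragments = []
    i = 0
    while i < len(s):
        best: list[str] = []
        # For each article position matching the current summary token,
        # extend the match as far as possible and keep the longest span.
        for j in range(len(a)):
            if s[i] == a[j]:
                k = 0
                while i + k < len(s) and j + k < len(a) and s[i + k] == a[j + k]:
                    k += 1
                if k > len(best):
                    best = s[i:i + k]
        if best:
            fragments.append(best)
            i += len(best)
        else:
            i += 1  # summary token not in article; skip it
    return fragments

def coverage(article: str, summary: str) -> float:
    """Fraction of summary tokens that lie inside an extractive fragment."""
    frags = extractive_fragments(article, summary)
    return sum(len(f) for f in frags) / len(summary.split())

def density(article: str, summary: str) -> float:
    """Average squared fragment length per summary token (rewards long copies)."""
    frags = extractive_fragments(article, summary)
    return sum(len(f) ** 2 for f in frags) / len(summary.split())
```

A faithfulness evaluation that accounts for extractiveness, as described in the talk, would compare systems at comparable coverage/density rather than rewarding models that simply copy more.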

Bio: He He is an assistant professor in the Center for Data Science and the Courant Institute at New York University. Her research interests include robust language understanding, text generation, and interactive NLP systems. She obtained her Ph.D. from the University of Maryland, College Park, and worked as a postdoc at Stanford University before joining NYU.