

Invited Talk in Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision

Invited Talk #7 - Controllable Text Generation with Multiple Constraints (Yulia Tsvetkov)

Yulia Tsvetkov


Abstract:

Title: Controllable Text Generation with Multiple Constraints

Conditional language generation models produce highly fluent but often unreliable outputs. This has motivated a surge of approaches to controlling various attributes of the text that models generate. However, the majority of existing approaches focus on monolingual settings and on controlling coarse-grained attributes of text (typically a single binary attribute). This talk proposes to focus on finer-grained aspects of the generated text, including in multilingual settings. I will present an algorithm for controllable inference from pretrained models that rewrites model outputs subject to multiple sentence-level, fine-grained, monolingual and cross-lingual constraints. I will conclude with a discussion of future work.

Bio: Yulia Tsvetkov is an assistant professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research group works on NLP for social good, multilingual NLP, and language generation. These projects are motivated by a unified goal: to extend the capabilities of human language technology beyond individual populations and across language boundaries, enabling NLP for diverse and disadvantaged users, the users who need it most. Prior to joining UW, Yulia was an assistant professor at Carnegie Mellon University and a postdoc at Stanford. She is a recipient of the Okawa Research Award, an Amazon Machine Learning Research Award, a Google Faculty Research Award, and multiple NSF awards.