

Spotlight in Workshop: Robustness in Sequence Modeling

An Invariant Learning Characterization of Controlled Text Generation

Claudia Shi · Carolina Zheng · Keyon Vafa · Amir Feder · David Blei


Abstract:

Controlled generation refers to the problem of creating text that contains stylistic or semantic attributes of interest. Many approaches reduce this problem to building a predictor of the desired attribute. For example, researchers hoping to deploy a large language model to produce non-toxic content may use a toxicity classifier to filter generated text. In this paper, we show that the performance of controlled generation may be poor if the target distribution of text differs from the distribution the predictor was trained on. Instead, we take inspiration from causal representation learning and cast controlled generation under distribution shift as an invariant learning problem: the most effective predictor should be invariant across multiple text environments. Experiments demonstrate the promise and difficulty of adapting invariant learning methods, which have been developed primarily for vision, to text.
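The core idea of training an attribute predictor that is invariant across text environments can be sketched with an IRMv1-style penalty (Arjovsky et al., 2019): minimize the average per-environment risk plus a gradient penalty that encourages the same classifier to be optimal in every environment. The sketch below is a minimal illustration, not the authors' implementation; the model, the environment construction, and the penalty weight are all assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1 penalty: squared gradient of the risk w.r.t. a dummy scale."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def invariant_loss(model, environments, penalty_weight=10.0):
    """Average per-environment risk plus the invariance penalty.

    `environments` is a list of (features, labels) batches, one per text
    environment (e.g., the same attribute labeled in different domains).
    """
    risks, penalties = [], []
    for x, y in environments:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + penalty_weight * torch.stack(penalties).mean()

# Toy usage: a linear attribute classifier over fixed text embeddings
# (dimensions and data here are placeholders).
model = nn.Linear(768, 1)
envs = [(torch.randn(32, 768), torch.randint(0, 2, (32,)).float())
        for _ in range(2)]
loss = invariant_loss(model, envs)
loss.backward()
```

Once trained, such a predictor would be used exactly as the abstract describes a standard classifier: scoring candidate generations and filtering those whose predicted attribute (e.g., toxicity) exceeds a threshold, with the invariance penalty intended to make that filtering more reliable when the generated text comes from a different distribution than the training data.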
