

Oral in Workshop: Human Evaluation of Generative Models

Operationalizing Specifications, In Addition to Test Sets for Evaluating Constrained Generative Models

Vikas Raunak · Matt Post · Arul Menezes


Abstract:

In this work, we present recommendations for the evaluation of state-of-the-art generative models on constrained generation tasks. Progress on generative models has been rapid in recent years, and these large-scale models have had three impacts. First, the fluency of generation in both the language and vision modalities has rendered common average-case evaluation metrics much less useful for diagnosing system errors. Second, user expectations around these models and their feted public releases have made the framing of out-of-domain generalization as the central technical problem less useful. Third, the same substrate models now form the basis of a number of applications, driven both by the utility of their representations and by phenomena such as in-context learning. Yet our evaluation methodologies have not adapted to these changes: while the methods of interacting with models have risen in abstraction level, a similar rise has not occurred in evaluation practices. In this paper, we argue that the scale of generative models can be exploited to raise the abstraction level at which evaluation itself is conducted, and we provide recommendations for doing so. Our recommendations are based on leveraging specifications as a powerful instrument for evaluating generation quality and are readily applicable to a variety of tasks.
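The core recommendation is to evaluate constrained generation against explicit specifications rather than only aggregate test-set scores. Below is a minimal sketch of what specification-based evaluation could look like; the `Specification` class, the toy checks, and the example sentence pairs are illustrative assumptions, not the instrument described in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# A specification here is a named, binary check on an (input, output) pair.
# These specifications are illustrative placeholders only.
@dataclass
class Specification:
    name: str
    check: Callable[[str, str], bool]

def evaluate_against_specs(
    pairs: List[Tuple[str, str]], specs: List[Specification]
) -> Dict[str, float]:
    """Report a per-specification pass rate instead of a single averaged score."""
    results = {}
    for spec in specs:
        passed = sum(1 for src, out in pairs if spec.check(src, out))
        results[spec.name] = passed / len(pairs) if pairs else 0.0
    return results

# Example: constrained translation outputs checked against two toy specifications.
specs = [
    Specification("uses_required_term", lambda src, out: "Vertrag" in out),
    Specification("within_length_ratio", lambda src, out: len(out) <= 2 * len(src)),
]
pairs = [
    ("Please sign the contract.", "Bitte unterschreiben Sie den Vertrag."),
    ("The contract is void.", "Die Vereinbarung ist ungültig."),
]
print(evaluate_against_specs(pairs, specs))
# e.g. {'uses_required_term': 0.5, 'within_length_ratio': 1.0}
```

Reporting pass rates per specification surfaces which constraints a system violates, whereas an average-case metric would fold these failures into one opaque number.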
