

Poster

Reading Tea Leaves: How Humans Interpret Topic Models

Jonathan Chang · Jordan Boyd-Graber · Sean Gerrish · Chong Wang · David Blei


Abstract:

Probabilistic topic models are a commonly used tool for analyzing text data, where the latent topic representation is used to perform qualitative evaluation of models and guide corpus exploration. Practitioners typically assume that the latent space is semantically meaningful, but this important property has lacked a quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by measures of model quality based on held-out likelihood. Surprisingly, topic models that perform better on held-out likelihood may actually infer less semantically meaningful topics.
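One of the human evaluations behind this paper is the word-intrusion task: show people a topic's top words plus one "intruder" word that is improbable in that topic but probable in another, and measure how reliably they spot it. The sketch below is a minimal illustration of how such an item could be constructed, not the authors' code; the toy vocabulary, the random topic-word matrix `phi`, and the helper `intrusion_item` are all assumptions for demonstration (a real study would use distributions from a fitted topic model).

```python
# Illustrative sketch of building one word-intrusion question.
# Assumptions: a toy vocabulary and a random topic-word matrix stand in
# for a trained topic model's distributions.

import numpy as np

rng = np.random.default_rng(0)

vocab = ["bank", "money", "loan", "river", "stream", "water",
         "game", "team", "score", "player", "coach", "season"]

# Toy topic-word distributions (each row sums to 1).
phi = rng.dirichlet(alpha=np.full(len(vocab), 0.1), size=3)

def intrusion_item(topic, n_top=5):
    """Return (shuffled word list, intruder) for one intrusion question."""
    order = np.argsort(phi[topic])[::-1]
    top = [vocab[i] for i in order[:n_top]]
    # Pick an intruder: high probability in some other topic,
    # and not already among this topic's top words.
    other = (topic + 1) % phi.shape[0]
    candidates = np.argsort(phi[other])[::-1]
    intruder = next(vocab[i] for i in candidates if vocab[i] not in top)
    words = top + [intruder]
    rng.shuffle(words)
    return words, intruder

words, intruder = intrusion_item(topic=0)
print("Which word does not belong?", words)
print("(intruder:", intruder + ")")
```

Aggregating human accuracy on many such items gives a per-model interpretability score that can then be compared against held-out likelihood, which is the comparison the abstract describes.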
