
Improving Topic Coherence with Regularized Topic Models
David Newman · Edwin Bonilla · Wray Buntine

Wed Dec 14 08:45 AM -- 02:59 PM (PST)

Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, result-set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflects broad patterns in external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.
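The abstract describes regularizers built from a structured prior over words that reflects patterns in external data. As a rough illustration of this idea (not the paper's actual algorithm), the sketch below builds a word-similarity matrix from hypothetical external co-occurrence counts and uses it to smooth a topic-word distribution, so that externally related words receive correlated probability mass; the function names and the mixing-weight parameter are illustrative assumptions.

```python
import numpy as np

def cooccurrence_similarity(cooc):
    """Row-normalize an external word co-occurrence count matrix
    into a similarity matrix C (each row sums to 1)."""
    cooc = np.asarray(cooc, dtype=float)
    row_sums = cooc.sum(axis=1, keepdims=True)
    return cooc / np.maximum(row_sums, 1e-12)

def regularize_topic(phi, C, strength=0.5):
    """Mix a topic-word distribution phi with its C-smoothed version.

    C @ phi spreads each word's probability mass over externally
    similar words; `strength` (an assumed hyperparameter) controls
    how strongly the external structure influences the topic.
    """
    smoothed = C @ phi
    mixed = (1 - strength) * phi + strength * smoothed
    return mixed / mixed.sum()

# Toy external data: words 0 and 1 co-occur often; word 2 is isolated.
cooc = np.array([[5, 4, 0],
                 [4, 5, 0],
                 [0, 0, 5]])
C = cooccurrence_similarity(cooc)

# A noisy topic that put all its mass on word 0.
phi = np.array([1.0, 0.0, 0.0])
phi_reg = regularize_topic(phi, C, strength=0.5)
```

After regularization, word 1 gains probability mass from its externally similar neighbor (word 0), while the unrelated word 2 stays at zero, which is the qualitative effect a structured word prior is meant to achieve.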

Author Information

David Newman (University of California, Irvine)
Edwin Bonilla (CSIRO's Data61)
Wray Buntine
