Topic models are increasingly relevant probabilistic models for dimensionality reduction of text data, inferring topics that capture meaningful themes of frequently co-occurring terms. We formulate topic modelling as an information retrieval task, where the goal is to capture relevant term co-occurrence patterns based on the latent topic representation. We evaluate performance on this task rigorously with respect to two types of errors, false negatives and false positives, based on the well-known precision-recall trade-off, and provide a statistical model that allows the user to balance the contributions of the two error types. When the user focuses solely on the contribution of false negatives, ignoring false positives altogether, the proposed model reduces to a standard topic model. Extensive experiments demonstrate that the proposed approach is effective and infers more coherent topics than existing related approaches.
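The precision-recall trade-off the abstract refers to can be sketched with a weighted F-beta score, which lets a user shift weight between false negatives and false positives. This is an illustrative sketch only, not the paper's statistical model; the function name and the use of term-pair sets are assumptions for the example.

```python
def precision_recall_fbeta(true_pairs, predicted_pairs, beta=1.0):
    """Precision, recall, and F-beta over sets of retrieved items
    (e.g. term co-occurrence pairs).

    beta > 1 weights recall more heavily (penalising false negatives);
    beta < 1 weights precision more heavily (penalising false positives).
    """
    true_pairs, predicted_pairs = set(true_pairs), set(predicted_pairs)
    tp = len(true_pairs & predicted_pairs)   # true positives
    fp = len(predicted_pairs - true_pairs)   # false positives
    fn = len(true_pairs - predicted_pairs)   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, fbeta
```

In the same spirit as the abstract's limiting case, driving the weight on false positives to zero (here, beta to infinity) makes the score depend on recall alone.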
Seppo Virtanen (University of Cambridge)
Mark Girolami (Imperial College London)