David Mimno, Amr Ahmed, Jordan L Boyd-Graber, Ankur Moitra, Hanna Wallach, Alex J Smola, David Blei, Anima Anandkumar

Cornell University; Research at Google; University of Maryland; IAS; Microsoft; Carnegie Mellon; Columbia University; UC Irvine

Workshop Speakers

Alex J Smola, Anima Anandkumar

Carnegie Mellon; UC Irvine

Workshop: Topic Models: Computation, Application, and Evaluation

7:30am – 6:30pm Tuesday, December 10, 2013

Harvey's Emerald Bay B

7:30-7:35 Introduction
7:35-8:20 Invited talk: Dave Blei
8:20-8:40 Contributed talk:
Margaret Roberts, Brandon Stewart, Dustin Tingley and Edoardo Airoldi.
"The Structural Topic Model and Applied Social Science"
8:40-9:00 Contributed talk:
Frank Rosner, Alexander Hinneburg, Michael Röder, Martin Nettling and Andreas Both.
"Evaluating topic coherence measures"
9:00-9:30 Break and Poster session 1
9:30-10:15 Invited talk: Alex Smola
10:15-10:30 Poster session 1 (continued)

15:30-15:45 Poster session 2
15:45-16:30 Invited talk: Anima Anandkumar
16:30-16:50 Contributed talk:
Jason Chuang, Yuening Hu, Ashley Jin, John D. Wilkerson, Daniel A. McFarland, Christopher D. Manning and Jeffrey Heer.
"Document Exploration with Topic Modeling: Designing Interactive Visualizations to Support Effective Analysis Workflows"
16:50-17:20 Break and Poster session 2 (continued)
17:20-17:40 Contributed talk:
Furong Huang, Niranjan U N and Animashree Anandkumar.
"Fast Detection of Overlapping Communities via Online Tensor Methods"
17:40-18:30 Group discussion

Since the most recent NIPS topic model workshop in 2010, interest in statistical topic modeling has continued to grow in a wide range of research areas, from theoretical computer science to English literature. The goal of this workshop, which marks the 10th anniversary of the original LDA NIPS paper, is to bring together researchers from the NIPS community and beyond to share results, ideas, and perspectives.

We will organize the workshop around the following three themes:

Computation: The computationally intensive process of training topic models has been a useful testbed for novel inference methods in machine learning, such as stochastic variational inference and spectral inference. Theoretical computer scientists have used LDA as a test case in establishing provable guarantees for unsupervised machine learning. This workshop will provide a forum for researchers developing new inference methods and theoretical analyses to present work in progress, as well as for practitioners to learn about state-of-the-art research in efficient and provable computing.
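To make the computational setting concrete, the following is a minimal sketch of the classic collapsed Gibbs sampler for LDA, the kind of training loop that the inference methods above aim to speed up or replace. The toy corpus, hyperparameters, and variable names are illustrative assumptions, not part of the workshop program.

```python
import random

random.seed(0)

# Hypothetical toy corpus: each document is a list of tokens.
docs = [
    ["apple", "banana", "apple", "fruit"],
    ["python", "code", "bug", "code"],
    ["fruit", "banana", "smoothie"],
    ["code", "python", "function"],
]
K, alpha, beta = 2, 0.1, 0.01           # topics, Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})
V = len(vocab)
widx = {w: i for i, w in enumerate(vocab)}

# Random initial topic assignments and the count tables Gibbs sampling needs.
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]           # doc d -> topic k counts
nkw = [[0] * V for _ in range(K)]       # topic k -> word w counts
nk = [0] * K                            # total tokens per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][widx[w]] += 1; nk[k] += 1

for _ in range(200):                    # Gibbs sweeps over every token
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k, wi = z[d][i], widx[w]
            # Remove the token's current assignment from the counts.
            ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
            # Full conditional p(z = t | everything else), up to a constant.
            weights = [
                (ndk[d][t] + alpha) * (nkw[t][wi] + beta) / (nk[t] + V * beta)
                for t in range(K)
            ]
            k = random.choices(range(K), weights=weights)[0]
            z[d][i] = k
            ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1

# Report the top words per topic from the learned counts.
for k in range(K):
    top = sorted(range(V), key=lambda w: -nkw[k][w])[:3]
    print(f"topic {k}:", [vocab[w] for w in top])
```

Even this tiny sampler touches every token of the corpus on every sweep, which is what makes large-scale training expensive and motivates the stochastic and spectral alternatives discussed at the workshop.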

Applications: Topic models are now commonly used in a broad array of applications to solve real-world problems, from questions in digital humanities and computational social science to e-commerce and government science policy. This workshop will share new application areas and discuss our experiences in adapting general tools to the particular needs of different settings. Participants will look for commonalities between diverse applications, while also using the particular challenges of each application to define theoretical research agendas.

Evaluation: A key strength of topic modeling is its support for exploratory analysis, but evaluating such use can be challenging: there may be no single right answer. As topic models become widely used outside machine learning, it becomes increasingly important to find evaluation strategies that match user needs. The workshop will focus both on the specifics of individual evaluation metrics and on the more general process of iteratively criticizing and improving models. We will also consider questions of interface design, visualization, and user experience.

Program committee (confirmed):

Edo Airoldi (Harvard), Laura Dietz (UMass), Jacob Eisenstein (GTech), Justin Grimmer (Stanford), Yoni Halpern (NYU), Daniel Hsu (Columbia), Brendan O'Connor (CMU), Michael Paul (JHU), Eric Ringger (BYU), Brandon Stewart (Harvard), Chong Wang (CMU), Sinead Williamson (UT-Austin)