Conditional Random Field Autoencoders for Unsupervised Structured Prediction
Waleed Ammar · Chris Dyer · Noah A Smith

Tue Dec 09 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observed data using a feature-rich conditional random field (CRF). Then a reconstruction of the input is (re)generated, conditional on the latent structure, using a generative model which factorizes similarly to the CRF. The autoencoder formulation enables efficient exact inference without resorting to unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization, and multi-view learning. Finally, we demonstrate competitive results with instantiations of the framework for two canonical tasks in natural language processing, part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines.

Author Information

Waleed Ammar (CMU)
Chris Dyer (DeepMind)
Noah A Smith (Carnegie Mellon University)
