
Multi-View Learning of Word Embeddings via CCA
Paramveer Dhillon · Dean P Foster · Lyle Ungar

Mon Dec 12 10:00 AM -- 02:59 PM (PST)

Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations, which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL), which uses a fast spectral method to estimate low-dimensional, context-specific word representations from unlabeled data. These representation features can then be used with any supervised learner. LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.
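The spectral core of a CCA-based embedding method can be illustrated in a few lines of NumPy. The sketch below is a hypothetical simplification, not the paper's LR-MVL algorithm: it treats the current word and the next word as the two views, whitens their cross-covariance with a ridge term, and takes a truncated SVD to obtain low-dimensional word vectors. All variable names and the toy corpus are invented for illustration.

```python
import numpy as np

# Toy corpus (hypothetical). Two views per position t:
# X = one-hot indicator of word t, Y = one-hot indicator of word t+1.
# LR-MVL itself uses richer left/right contexts and an iterative
# spectral procedure; this only shows the one-step CCA idea.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, n = len(vocab), len(corpus) - 1

X = np.zeros((n, V))
Y = np.zeros((n, V))
for t in range(n):
    X[t, idx[corpus[t]]] = 1.0
    Y[t, idx[corpus[t + 1]]] = 1.0

# Regularized covariances; the ridge term keeps the whitening stable.
lam = 1e-2
Cxx = X.T @ X / n + lam * np.eye(V)
Cyy = Y.T @ Y / n + lam * np.eye(V)
Cxy = X.T @ Y / n

def inv_sqrt(M):
    """Inverse square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# CCA directions come from the SVD of Cxx^{-1/2} Cxy Cyy^{-1/2}.
K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
U, s, Vt = np.linalg.svd(K)

k = 3  # embedding dimension (chosen arbitrarily here)
embeddings = inv_sqrt(Cxx) @ U[:, :k]  # one k-dim vector per vocab word
print(embeddings.shape)  # (V, k)
```

Because the whole computation reduces to eigendecompositions and an SVD, it has a unique global optimum, which is the sense in which spectral methods like this avoid the local minima of iteratively trained neural embeddings.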

Author Information

Paramveer Dhillon (University of Pennsylvania)
Dean P Foster (University of Pennsylvania)
Lyle Ungar (University of Pennsylvania)

More from the Same Authors