Poster
Identifiability and Unmixing of Latent Parse Trees
Percy Liang · Sham M Kakade · Daniel Hsu

Wed Dec 05 07:00 PM -- 11:59 PM (PST) @ Harrah’s Special Events Center 2nd Floor

This paper explores unsupervised learning of parsing models along two directions. First, which models are identifiable from infinite data? We use a general technique for numerically checking identifiability based on the rank of a Jacobian matrix, and apply it to several standard constituency and dependency parsing models. Second, for identifiable models, how do we estimate the parameters efficiently? EM suffers from local optima, while recent work using spectral methods cannot be directly applied since the topology of the parse tree varies across sentences. We develop a strategy, unmixing, which deals with this additional complexity for restricted classes of parsing models.
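As a concrete illustration of the Jacobian-rank identifiability check described above, the sketch below numerically tests local identifiability of a small toy model. This is a minimal sketch only: the moment map, parameter dimension, and names such as `moment_map` and `is_locally_identifiable` are illustrative assumptions, not the paper's actual parsing models or code.

```python
# Minimal sketch of a numerical identifiability check: a model is locally
# identifiable (up to symmetries) if the Jacobian of the map from parameters
# to observable moments has full column rank at a generic parameter point.
# NOTE: the toy moment map below is a stand-in, not the paper's parsing models.
import jax
import jax.numpy as jnp
import numpy as np

def moment_map(theta):
    """Toy map from a 3-dim parameter vector to observable moments.

    In the paper's setting this would map parsing-model parameters
    (e.g. rule probabilities) to moments of the sentence distribution.
    """
    a, b, c = theta
    return jnp.array([
        a + b,        # first observable statistic
        a * b + c,    # second observable statistic
        a * b * c,    # third observable statistic
        a ** 2 + c,   # fourth observable statistic
    ])

def is_locally_identifiable(moment_fn, dim, trials=5, seed=0):
    """Check whether the Jacobian has full column rank at random parameters.

    Full rank at a generic (random) point suggests local identifiability;
    rank deficiency at every draw suggests the parameters cannot be
    recovered from these moments alone.
    """
    key = jax.random.PRNGKey(seed)
    jac_fn = jax.jacfwd(moment_fn)
    for _ in range(trials):
        key, subkey = jax.random.split(key)
        theta = jax.random.uniform(subkey, (dim,))
        rank = np.linalg.matrix_rank(np.asarray(jac_fn(theta)))
        if rank == dim:
            return True
    return False

if __name__ == "__main__":
    print("Locally identifiable:", is_locally_identifiable(moment_map, dim=3))
```

Checking the rank at a few random parameter draws suffices because rank deficiency at a generic point is a measure-zero event unless the Jacobian is rank-deficient everywhere, which is the symptom of a genuinely non-identifiable parameterization.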

Author Information

Percy Liang (Stanford University)

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Sham M Kakade (Harvard University & Amazon)
Daniel Hsu (Columbia University)

See <https://www.cs.columbia.edu/~djhsu/>
