

Natural language tasks are characterized by extreme ambiguity, scale, and sparsity, and have therefore greatly benefited from statistical techniques in recent years. These same characteristics have also created many exciting challenges and opportunities where basic approaches are stretched to their limits. As one example, models of linguistic phenomena often operate over complex combinatorial structures, such as trees, matchings, or graphs, requiring special techniques and algorithms. As another example, real-world applications regularly encounter severe nonstationarity, where test sets diverge greatly from training sets, requiring effective methods for adaptation. As a third example, problems like machine translation often have many correct outputs, only one of which is given at training time, complicating error-driven methods. This tutorial will both survey recent advances and present open problems that are ripe for collaboration between the NLP and ML communities.
