A Primal Dual Formulation For Deep Learning With Constraints
Yatin Nandwani · Abhishek Pathak · Mausam · Parag Singla

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #204

For several problems of interest, there are natural constraints which exist over the output label space. For example, for the joint task of NER and POS labeling, these constraints might specify that the NER label ‘organization’ is consistent only with the POS labels ‘noun’ and ‘preposition’. These constraints can be a great way of injecting prior knowledge into a deep learning model, thereby improving overall performance. In this paper, we present a constrained optimization formulation for training a deep network with a given set of hard constraints on output labels. Our novel approach first converts the label constraints into soft logic constraints over the probability distributions output by the network. It then converts the constrained optimization problem into an alternating min-max optimization with Lagrangian variables defined for each constraint. Since the constraints are independent of the target labels, our framework easily generalizes to a semi-supervised setting. We experiment on the tasks of Semantic Role Labeling (SRL), Named Entity Recognition (NER) tagging, and fine-grained entity typing, and show that our constraints not only significantly reduce the number of constraint violations, but can also result in state-of-the-art performance.
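The alternating min-max scheme described above can be sketched on a toy scalar problem. This is only an illustrative sketch of the general primal-dual idea (gradient descent on the primal variables, projected gradient ascent on one Lagrange multiplier per constraint), not the authors' actual deep-network implementation; the objective, constraint, and learning rates below are assumptions chosen for clarity:

```python
# Illustrative primal-dual (alternating min-max) optimization:
#   minimize f(theta) = (theta - 2)^2  subject to  g(theta) = theta - 1 <= 0
# Lagrangian: L(theta, lam) = f(theta) + lam * g(theta), with lam >= 0.
# In the paper's setting theta would be network weights and g(.) a soft
# logic constraint over output distributions; here it is a scalar toy.

def primal_dual(steps=2000, lr_theta=0.1, lr_lam=0.1):
    theta, lam = 0.0, 0.0
    for _ in range(steps):
        # primal step: gradient descent on theta
        grad_theta = 2 * (theta - 2) + lam
        theta -= lr_theta * grad_theta
        # dual step: projected gradient ascent on the multiplier,
        # keeping lam nonnegative (the projection onto lam >= 0)
        lam = max(0.0, lam + lr_lam * (theta - 1))
    return theta, lam

theta, lam = primal_dual()
# theta approaches 1 (the constraint becomes active) and lam approaches 2
```

The multiplier grows only while the constraint is violated, so unconstrained examples (e.g. unlabeled data in the semi-supervised setting) still contribute gradient signal through the constraint term alone.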

Author Information

Yatin Nandwani (Indian Institute Of Technology Delhi)
Abhishek Pathak (Indian Institute Of Technology, Delhi)
Mausam (Indian Institute of Technology Delhi)
Parag Singla (Indian Institute of Technology Delhi)
