

Demonstration

Babble Labble: Learning from Natural Language Explanations

Braden Hancock · Paroma Varma · Percy Liang · Christopher Ré · Stephanie Wang

Pacific Ballroom Concourse #D8

Abstract:

We introduce Babble Labble, a system for converting natural language explanations into massive training sets with probabilistic labels. In this demo, users are shown unlabeled examples for a simple relation extraction task (identifying mentions of spouses in the news). For each example, instead of providing a label, users write a sentence describing one reason why the example should receive a certain label. These explanations are parsed into executable functions in real time and applied to the unlabeled dataset. We use data programming to resolve conflicts between the functions and combine their weak labels into a single probabilistic label per example. This large, weakly labeled training set is then used to train a discriminative model that improves generalization by learning features never mentioned in the small set of explanations. Using the explanations the user wrote, we calculate the final quality of the complete system, finding in most cases that one to two dozen explanations achieve the same quality as hundreds or thousands of individual labels.
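The pipeline above can be sketched in miniature. The snippet below is an illustrative assumption, not Babble Labble's actual parser or data programming model: two hand-written labeling functions stand in for functions parsed from explanations, and their non-abstaining votes are simply averaged into one probabilistic label per example.

```python
# Weak labels: positive, negative, or abstain (no opinion).
POS, NEG, ABSTAIN = 1, -1, 0

def lf_says_married(example):
    # Stand-in for a parsed explanation like:
    # "True, because the word 'married' appears between the two names."
    return POS if "married" in example["between"] else ABSTAIN

def lf_says_colleague(example):
    # Stand-in for: "False, because 'colleague' appears between the names."
    return NEG if "colleague" in example["between"] else ABSTAIN

def probabilistic_label(example, lfs):
    """Combine non-abstaining weak labels into a single P(positive)."""
    votes = [v for v in (lf(example) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return 0.5  # no labeling function fired: maximally uncertain
    # Average the votes (in [-1, 1]) and map to a probability in [0, 1].
    return (sum(votes) / len(votes) + 1) / 2

lfs = [lf_says_married, lf_says_colleague]
ex1 = {"between": "and his wife, whom he married in 1990,"}
ex2 = {"between": ", a colleague of"}
p1 = probabilistic_label(ex1, lfs)  # -> 1.0
p2 = probabilistic_label(ex2, lfs)  # -> 0.0
```

In the full system, the simple vote average is replaced by a learned generative model that weights each function by its estimated accuracy, and the resulting probabilistic labels supervise a separate discriminative model.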
