End-to-End Weak Supervision
Salva Rühling Cachay · Benedikt Boecking · Artur Dubrawski

Thu Dec 09 08:30 AM -- 10:00 AM (PST) @ Virtual

Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications by replacing the tedious manual collection of ground-truth labels. Current state-of-the-art approaches that use no labeled training data, however, require two separate modeling steps: learning a probabilistic latent variable model based on the WS sources -- making assumptions that rarely hold in practice -- followed by downstream model training. Importantly, this first modeling step does not consider the performance of the downstream model. To address these caveats, we propose an end-to-end approach for directly learning the downstream model by maximizing its agreement with probabilistic labels generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved performance over prior work in terms of end-model performance on downstream test sets, as well as improved robustness to dependencies among weak supervision sources.
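A minimal sketch of the end-to-end idea described in the abstract, on toy data: a small "label model" reparameterizes the posterior over labels as a function of the weak-source votes, a downstream logistic model predicts from features, and both are trained jointly by minimizing a cross-entropy agreement loss between their outputs. All names, model forms, hyperparameters, and the specific symmetric agreement objective here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n points, d features, m weak labelers voting in {-1, 0, +1}
# (0 = abstain). true_y is hidden ground truth, used only for evaluation.
n, d, m = 200, 5, 3
true_y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + true_y[:, None]        # features correlated with label
votes = np.where(rng.random((n, m)) < 0.3, 0,        # 30% abstentions
                 np.where(rng.random((n, m)) < 0.8,  # 80% of votes are correct
                          2 * true_y[:, None] - 1,
                          1 - 2 * true_y[:, None]))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two models trained jointly:
#   w_lab reparameterizes the label posterior from the weak votes,
#   w_end is the downstream (end) model on the features.
w_lab = 0.1 * np.ones(m)   # small nonzero init so gradients are not all zero
w_end = np.zeros(d)
lr = 0.5

def agreement_loss(w_lab, w_end):
    p = sigmoid(votes @ w_lab)   # probabilistic labels from the weak sources
    q = sigmoid(X @ w_end)       # end-model predictions
    eps = 1e-9
    # cross-entropy of end-model predictions against the probabilistic labels
    return -np.mean(p * np.log(q + eps) + (1 - p) * np.log(1 - q + eps))

losses = []
for _ in range(100):
    p = sigmoid(votes @ w_lab)
    z = X @ w_end                # end-model logits
    q = sigmoid(z)
    # exact gradients of the agreement loss w.r.t. each parameter set
    g_end = X.T @ (q - p) / n                         # pulls q toward p
    g_lab = -votes.T @ (z * p * (1 - p)) / n          # sharpens p toward q
    w_end -= lr * g_end
    w_lab -= lr * g_lab
    losses.append(agreement_loss(w_lab, w_end))
```

In this sketch the end model ends up correlated with the hidden labels even though training never sees them: the weak votes carry the signal, and the joint objective lets the downstream model's feature-based predictions refine the label posterior in return. A real implementation would need safeguards against the degenerate solution where both models collapse onto a constant label.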

Author Information

Salva Rühling Cachay (Technical University of Darmstadt)
Benedikt Boecking (Carnegie Mellon University)

I'm a PhD student in Robotics at Carnegie Mellon University, where I'm a member of the Auton Lab advised by Artur Dubrawski. I am interested in the technical and theoretical aspects of how we engage domain experts in building and training Machine Learning models. In my current research projects I develop methods for data exploration (semi-supervised clustering) and label acquisition (active learning, interactive learning). In the past, I have also worked on algorithms, tools, and data analysis to help fight sex trafficking using deep web and dark web data.

Artur Dubrawski (Carnegie Mellon University)
