Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ Grand Ballroom B
Learning with Limited Labeled Data: Weak Supervision and Beyond
Isabelle Augenstein · Stephen Bach · Eugene Belilovsky · Matthew Blaschko · Christoph Lampert · Edouard Oyallon · Emmanouil Antonios Platanios · Alexander Ratner · Christopher Ré






Modern representation learning techniques like deep neural networks have had a major impact both within and beyond the field of machine learning, achieving new state-of-the-art performance with little or no feature engineering on a vast array of tasks. However, these gains are often difficult to translate into real-world settings because they require massive hand-labeled training sets. In the vast majority of real-world settings, collecting such training sets by hand is infeasible due to the cost of labeling data or the paucity of data in a given domain (e.g., rare diseases in medical applications). In this workshop we focus on techniques for few-sample learning and for using weaker supervision when large unlabeled datasets are available, as well as the theory associated with both.

One increasingly popular approach is to use weaker forms of supervision, i.e., supervision that is potentially noisier, biased, and/or less precise. An overarching goal of such approaches is to use domain knowledge and resources from subject matter experts, but to solicit them in higher-level, lower-fidelity, or more opportunistic ways. Examples include higher-level abstractions such as heuristic labeling rules, feature annotations, constraints, expected distributions, and generalized expectation criteria; noisier or biased labels from distant supervision, crowd workers, and weak classifiers; data augmentation strategies that express class invariances; and potentially mismatched training data, as in multitask and transfer learning settings.
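To make the flavor of these approaches concrete, below is a minimal Python sketch of one such higher-level abstraction: heuristic labeling rules whose noisy votes are combined, here by simple majority vote, to produce weak training labels without hand annotation. The rules, names, and task (flagging customer complaints) are purely hypothetical illustrations, not the method of any particular speaker or system.

from collections import Counter

ABSTAIN = None  # a rule may decline to vote on a given example

def lf_mentions_refund(text):
    # Hypothetical rule: mentions of "refund" suggest a complaint (label 1).
    return 1 if "refund" in text.lower() else ABSTAIN

def lf_mentions_thanks(text):
    # Hypothetical rule: thanking language suggests a non-complaint (label 0).
    return 0 if "thank" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    # Hypothetical, weaker rule: repeated "!" hints at a complaint.
    return 1 if text.count("!") >= 2 else ABSTAIN

LABELING_RULES = [lf_mentions_refund, lf_mentions_thanks, lf_many_exclamations]

def weak_label(text):
    # Combine the rules' noisy votes by simple majority; None if all abstain.
    votes = [v for v in (rule(text) for rule in LABELING_RULES) if v is not ABSTAIN]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

unlabeled = ["I want a refund now!!", "Thanks, everything arrived on time."]
noisy_labels = [weak_label(t) for t in unlabeled]  # -> [1, 0]

In practice, frameworks for this kind of programmatic supervision go further, for example by modeling the accuracies and correlations of the rules rather than taking an unweighted vote.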

Along with practical methods and techniques for dealing with limited labeled data, this workshop will also focus on the theory of learning in this general setting. Although several classic techniques in statistical learning theory handle the case of few samples and high dimensions, extending these results, for example to the recent successes of deep learning, remains a challenge. How can the theory or the techniques that have proven successful in deep learning be adapted to the case of limited labeled data? How can systems designed (and potentially deployed) for large-scale learning be adapted to small-data settings? What are efficient and practical ways to incorporate prior knowledge?

This workshop will focus on highlighting both practical and theoretical aspects of learning with limited labeled data, including but not limited to topics such as:
- Learning from noisy labels
- “Distant” or heuristic supervision
- Non-standard labels such as feature annotations, distributions, and constraints
- Data augmentation and/or the use of simulated data (see the sketch after this list)
- Frameworks that can tackle both very few samples and settings with more data, without extensive intervention
- Effective and practical techniques for incorporating domain knowledge
- Applications of machine learning for small data problems in medical images and industry
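As an aside on the data augmentation topic above, here is a minimal Python sketch of augmentation expressing a class invariance: a horizontal flip that is assumed not to change an example's label. The toy list-of-rows image representation is purely illustrative.

def flip_horizontal(image_rows):
    # Mirror each row of a toy image (a list of rows of pixel values).
    return [list(reversed(row)) for row in image_rows]

def augment(image_rows, label):
    # Return the original example plus a flipped copy with the same label,
    # encoding the assumed invariance of the class under horizontal flips.
    return [(image_rows, label), (flip_horizontal(image_rows), label)]

examples = augment([[0, 1, 2], [3, 4, 5]], label=1)  # two examples from one annotation

Such invariance-encoding transforms effectively multiply the labeled set without any additional annotation effort.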

Welcome & Opening Remarks (Talk)
Invited Talk: "Tales from fMRI: Learning from limited labeled data" (Invited Talk)
Invited Talk: Learning from Limited Labeled Data (But a Lot of Unlabeled Data) (Invited Talk)
Contributed Talk 1: "Smooth Neighbors on Teacher Graphs for Semi-supervised Learning" (Contributed Talk)
1-minute Poster Spotlights (Session #1) (Spotlights)
Poster Sessions (Poster Session)
Invited Talk: "Light Supervision of Structured Prediction Energy Networks" (Invited Talk)
Invited Talk: "Forcing Neural Link Predictors to Play by the Rules", Sebastian Riedel (Invited Talk)
Lunch
Panel: Limited Labeled Data in Medical Imaging (Panel)
1-minute Poster Spotlights (Session #2) (Spotlights)
Poster Session / Coffee Break (Poster Session)
Invited Talk: Sample and Computationally Efficient Active Learning Algorithms (Invited Talk)
Contributed Talk 2: "EZLearn: Exploiting Organic Supervision in Large-Scale Data Annotation" (Contributed Talk)
Invited Talk: Overcoming Limited Data with GANs (Invited Talk)
Invited Talk: Sameer Singh, "That Doesn't Make Sense! A Case Study in Actively Annotating Model Explanations" (Invited Talk)
Contributed Talk 3: Local Affine Approximators of Deep Neural Nets for Improving Knowledge Transfer (Contributed Talk)
Contributed Talk 4: Co-trained Ensemble Models for Weakly Supervised Cyberbullying Detection (Contributed Talk)
Invited Talk: What’s so Hard About Natural Language Understanding? (Invited Talk)
Closing Remarks & Awards (Talk)