

Poster

Slice-based Learning: A Programming Model for Residual Learning in Critical Data Slices

Vincent Chen · Sen Wu · Alexander Ratner · Jen Weng · Christopher Ré

East Exhibition Hall B + C #67

Keywords: [ Representation Learning ] [ Multitask and Transfer Learning ] [ Algorithms ]


Abstract:

In real-world machine learning applications, certain data subsets correspond to especially critical outcomes: vulnerable cyclist detections are safety-critical in an autonomous driving task, and "question" sentences might be important to a dialogue agent's language understanding for product purposes. While machine learning models can achieve strong performance on coarse-grained metrics like F1-score and overall accuracy, they may underperform on these critical subsets, which we define as slices, the key abstraction in our approach. To address slice-level performance, practitioners often train separate "expert" models on slice subsets or use multi-task hard parameter sharing. We propose Slice-based Learning, a new programming model in which the slicing function (SF), a programmer abstraction, is used to specify additional model capacity for each slice. Any model can leverage SFs to learn slice-specific representations, which are combined with an attention mechanism to make slice-aware predictions. We show that our approach improves over baselines in computational complexity, in slice-specific performance by up to 19.0 points, and in overall performance by up to 4.6 F1 points, on applications spanning natural language understanding and computer vision benchmarks as well as production-scale industrial systems.
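
As a rough illustration of the programming model described in the abstract, the sketch below shows how a slicing function and a slice-aware prediction head might fit together. This is a minimal PyTorch sketch based only on the abstract, not the authors' released implementation: the names is_question and SliceAwareHead, the per-slice expert layers, and the membership-weighted attention are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def is_question(sentence: str) -> bool:
        # Hypothetical slicing function (SF): marks "question" sentences as a slice.
        return sentence.strip().endswith("?")

    class SliceAwareHead(nn.Module):
        # Minimal slice-aware prediction head: one small expert per slice,
        # combined by attention over predicted slice membership.
        def __init__(self, hidden_dim: int, num_slices: int, num_classes: int):
            super().__init__()
            # One expert transformation per slice, learning a slice-specific representation.
            self.experts = nn.ModuleList(
                [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_slices)]
            )
            # Indicator heads predict slice membership (supervised by SF outputs).
            self.indicators = nn.ModuleList(
                [nn.Linear(hidden_dim, 1) for _ in range(num_slices)]
            )
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, features: torch.Tensor):
            # features: [batch, hidden_dim], produced by any backbone model.
            expert_reps = torch.stack(
                [F.relu(expert(features)) for expert in self.experts], dim=1
            )  # [batch, num_slices, hidden_dim]
            membership_logits = torch.cat(
                [indicator(features) for indicator in self.indicators], dim=1
            )  # [batch, num_slices]; trained against SF labels such as is_question
            # Attention over slice experts, weighted by predicted slice membership.
            attention = torch.softmax(membership_logits, dim=1).unsqueeze(-1)
            slice_aware_rep = (attention * expert_reps).sum(dim=1)  # [batch, hidden_dim]
            return self.classifier(slice_aware_rep), membership_logits

    # Usage: attach the slice-aware head to backbone features.
    head = SliceAwareHead(hidden_dim=128, num_slices=2, num_classes=2)
    logits, membership_logits = head(torch.randn(4, 128))

Under these assumptions, the backbone stays unchanged; each SF only adds a small expert and indicator head, which is what keeps the extra capacity targeted at the declared slices.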
