

Spotlight in Workshop: Algorithmic Fairness through the Lens of Time

Loss Modeling for Multi-Annotator Datasets

Uthman Jinadu · Jesse Annan · Shanshan Wen · Yi Ding

Fri 15 Dec 11 a.m. PST — 11:03 a.m. PST
 
Presentation: Algorithmic Fairness through the Lens of Time
Fri 15 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

Accounting for the opinions of all annotators of a dataset is critical for fairness. However, when annotating large datasets, individual annotators frequently provide thousands of ratings, which can lead to fatigue. Additionally, these annotation processes can span multiple days, which can lead to an inaccurate representation of an annotator's opinion over time. To combat this, we propose to learn a more accurate representation of diverse opinions by utilizing multitask learning in conjunction with loss-based label correction. We show that, using our novel formulation, we can cleanly separate agreeing and disagreeing annotations. Furthermore, we demonstrate that this modification can improve prediction performance in both single- and multi-annotator settings. Lastly, we show that this method remains robust to additional label noise applied to subjective data.
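The abstract describes combining multitask learning (per-annotator prediction heads over a shared encoder) with a loss-based correction that separates agreeing from disagreeing annotations. The sketch below is not the authors' implementation; it is a minimal illustration of that general idea, assuming a PyTorch setup, a shared encoder with one head per annotator, and a simple rule (hypothetical `threshold` parameter) that down-weights annotations whose per-sample loss is unusually high.

```python
# Minimal sketch (assumed, not the paper's code): shared encoder + per-annotator
# heads (multitask learning), with a loss-based down-weighting of likely
# disagreeing/noisy annotations during training.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiAnnotatorModel(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_annotators: int, num_classes: int):
        super().__init__()
        # Shared representation learned from all annotators' ratings.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One lightweight head per annotator (the multitask part).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, num_classes) for _ in range(num_annotators)
        )

    def forward(self, x: torch.Tensor, annotator_id: int) -> torch.Tensor:
        return self.heads[annotator_id](self.encoder(x))


def loss_corrected_step(model, optimizer, x, y, annotator_id, threshold=2.0):
    """One training step with a simple loss-based correction: samples whose loss
    exceeds `threshold` times the batch median are treated as likely
    disagreements or noise and are down-weighted (an assumed heuristic)."""
    logits = model(x, annotator_id)
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")
    median = per_sample_loss.detach().median()
    # Agreeing (low-loss) annotations keep full weight; high-loss ones are damped.
    weights = torch.ones_like(per_sample_loss)
    weights[per_sample_loss.detach() > threshold * median] = 0.1
    loss = (weights * per_sample_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The per-sample loss values themselves can also be inspected to separate agreeing from disagreeing annotations, which is the kind of clean separation the abstract reports; the specific down-weighting rule here is only one possible instantiation.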
