Poster
in
Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Multi-Task Learning with Self-Supervised Objectives can Improve Worst-Group Outcomes

Atharva Kulkarni · Lucio M Dery · Amrith Setlur · Aditi Raghunathan · Ameet Talwalkar · Graham Neubig


Abstract: In order to create machine learning systems that serve a variety of users well, it is important to not only achieve high performance on average but also ensure equitable outcomes across diverse groups. In this paper, we explore the potential of multi-task learning (MTL) with self-supervised objectives as a tool to address the challenge of group-wise fairness. We show that by regularizing the joint representation space during multi-tasking, we are able to obtain improvements on worst-group error. Through comprehensive experiments across NLP and CV datasets, we demonstrate that regularized multi-tasking with self-supervised learning competes favorably with state-of-the-art distributionally robust optimization methods. Our approach -- without introducing data external to the end-task -- improves worst-case group accuracy over empirical risk minimization by as much as $\sim4\%$ on average in settings where group annotations are completely unavailable.
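The abstract describes combining the end-task loss with self-supervised auxiliary objectives while regularizing the shared representation space. The paper's exact formulation is not given here, so the following is a minimal hypothetical sketch: a weighted sum of the end-task loss, a self-supervised loss, and an L2 penalty on the shared representation, with illustrative weights `lam` and `mu` that are assumptions, not values from the paper.

```python
import numpy as np

def multitask_objective(task_loss, ssl_loss, shared_repr, lam=1.0, mu=0.1):
    """Hypothetical regularized multi-task objective (illustrative only).

    task_loss   -- scalar end-task (e.g. classification) loss
    ssl_loss    -- scalar self-supervised auxiliary loss
    shared_repr -- array of shared representations for the batch
    lam, mu     -- assumed weights for the SSL term and the L2
                   representation penalty; not from the paper
    """
    # L2 penalty that shrinks the joint representation space,
    # a simple stand-in for the regularization the abstract mentions.
    reg = mu * float(np.mean(shared_repr ** 2))
    return task_loss + lam * ssl_loss + reg

# Usage: with a zero representation the penalty vanishes,
# leaving the plain weighted sum of the two losses.
total = multitask_objective(1.0, 0.5, np.zeros(4))
```

In settings without group annotations, such a representation-level penalty is one plausible way an MTL objective could improve worst-group error without external data, which is the regime the abstract reports results in.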