In real-world classification tasks, each class often comprises multiple finer-grained "subclasses." As the subclass labels are frequently unavailable, models trained using only the coarser-grained class labels often exhibit highly variable performance across different subclasses. This phenomenon, known as hidden stratification, has important consequences for models deployed in safety-critical applications such as medicine. We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown. We first observe that unlabeled subclasses are often separable in the feature space of deep neural networks, and exploit this fact to estimate subclass labels for the training data via clustering techniques. We then use these approximate subclass labels as a form of noisy supervision in a distributionally robust optimization objective. We theoretically characterize the performance of GEORGE in terms of the worst-case generalization error across any subclass. We empirically validate GEORGE on a mix of real-world and benchmark image classification datasets, and show that our approach boosts worst-case subclass accuracy by up to 22 percentage points compared to standard training techniques, without requiring any prior information about the subclasses.
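As a rough illustration of the two-stage procedure described above, the sketch below first clusters each class's feature embeddings to estimate subclass labels, and then uses those estimates as groups in a group-DRO-style objective. This is a minimal sketch, not the authors' implementation: the helper names (`estimate_subclass_labels`, `GroupDROLoss`), the PCA-plus-Gaussian-mixture clustering choice, and the fixed number of subclasses per class are illustrative assumptions.

```python
# Minimal sketch of the two-stage idea: cluster per-class features to estimate
# subclass labels, then optimize a group-DRO objective over those estimated groups.
# All names and hyperparameters here are placeholders, not the paper's API.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def estimate_subclass_labels(features: np.ndarray, labels: np.ndarray,
                             n_subclasses: int = 2, n_components: int = 2) -> np.ndarray:
    """Cluster each class's feature embeddings to obtain approximate subclass labels."""
    subclass_labels = np.zeros(len(labels), dtype=np.int64)
    offset = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Reduce dimensionality before clustering (a common preprocessing choice).
        d = min(n_components, features.shape[1], len(idx) - 1)
        reduced = PCA(n_components=d).fit_transform(features[idx])
        clusters = GaussianMixture(n_components=n_subclasses, random_state=0).fit_predict(reduced)
        subclass_labels[idx] = clusters + offset  # unique group id per (class, cluster) pair
        offset += n_subclasses
    return subclass_labels


class GroupDROLoss(torch.nn.Module):
    """Group DRO objective: upweight the groups (estimated subclasses) with the highest loss."""

    def __init__(self, n_groups: int, step_size: float = 0.01):
        super().__init__()
        self.register_buffer("group_weights", torch.ones(n_groups) / n_groups)
        self.step_size = step_size

    def forward(self, logits: torch.Tensor, targets: torch.Tensor,
                groups: torch.Tensor) -> torch.Tensor:
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        n_groups = self.group_weights.numel()
        group_losses = torch.zeros(n_groups, device=logits.device)
        for g in range(n_groups):
            mask = groups == g
            if mask.any():
                group_losses[g] = per_sample[mask].mean()
        # Exponentiated-gradient update of the adversarial group weights.
        with torch.no_grad():
            self.group_weights *= torch.exp(self.step_size * group_losses)
            self.group_weights /= self.group_weights.sum()
        return (self.group_weights * group_losses).sum()
```

In training, each example's estimated group id would be passed to this loss alongside its logits and label, in place of a standard cross-entropy objective.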
Author Information
Nimit Sohoni (Stanford University)
Jared Dunnmon (Stanford University)
Postdoctoral scholar with Prof. Chris Ré at Stanford University; research foci include multi-modal weak supervision and data augmentation.
Geoffrey Angus (Stanford University)
Albert Gu (Stanford University)
Christopher Ré (Stanford University)
More from the Same Authors
- 2021 : Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations
  Michael Zhang · Nimit Sohoni · Hongyang Zhang · Chelsea Finn · Christopher Ré
- 2021 : Combining Recurrent, Convolutional, and Continuous-Time Models with Structured Learnable Linear State-Space Layers
  Isys Johnson · Albert Gu · Karan Goel · Khaled Saab · Tri Dao · Atri Rudra · Christopher Ré
- 2022 Spotlight: Lightning Talks 2A-4
  Sarthak Mittal · Richard Grumitt · Zuoyu Yan · Lihao Wang · Dongsheng Wang · Alexander Korotin · Jiangxin Sun · Ankit Gupta · Vage Egiazarian · Tengfei Ma · Yi Zhou · Yishi Xu · Albert Gu · Biwei Dai · Chunyu Wang · Yoshua Bengio · Uros Seljak · Miaoge Li · Guillaume Lajoie · Yiqun Wang · Liangcai Gao · Lingxiao Li · Jonathan Berant · Huang Hu · Xiaoqing Zheng · Zhibin Duan · Hanjiang Lai · Evgeny Burnaev · Zhi Tang · Zhi Jin · Xuanjing Huang · Chaojie Wang · Yusu Wang · Jian-Fang Hu · Bo Chen · Chao Chen · Hao Zhou · Mingyuan Zhou
- 2022 Spotlight: Diagonal State Spaces are as Effective as Structured State Spaces
  Ankit Gupta · Albert Gu · Jonathan Berant
- 2022 : Panel Discussion: Opportunities and Challenges
  Kenneth Norman · Janice Chen · Samuel J Gershman · Albert Gu · Sepp Hochreiter · Ida Momennejad · Hava Siegelmann · Sainbayar Sukhbaatar
- 2022 Poster: On the Parameterization and Initialization of Diagonal State Space Models
  Albert Gu · Karan Goel · Ankit Gupta · Christopher Ré
- 2022 Poster: Diagonal State Spaces are as Effective as Structured State Spaces
  Ankit Gupta · Albert Gu · Jonathan Berant
- 2022 Poster: S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces
  Eric Nguyen · Karan Goel · Albert Gu · Gordon Downs · Preey Shah · Tri Dao · Stephen Baccus · Christopher Ré
- 2022 Poster: xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery
  Fernando Paolo · Tsu-ting Tim Lin · Ritwik Gupta · Bryce Goodman · Nirav Patel · Daniel Kuster · David Kroodsma · Jared Dunnmon
- 2021 : Alex Ratner and Chris Re - The Future of Data Centric AI
  Christopher Ré
- 2021 Poster: Scatterbrain: Unifying Sparse and Low-rank Attention
  Beidi Chen · Tri Dao · Eric Winsor · Zhao Song · Atri Rudra · Christopher Ré
- 2021 Poster: Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers
  Albert Gu · Isys Johnson · Karan Goel · Khaled Saab · Tri Dao · Atri Rudra · Christopher Ré
- 2020 : Tree Covers: An Alternative to Metric Embeddings
  Roshni Sahoo · Ines Chami · Christopher Ré
- 2020 Poster: HiPPO: Recurrent Memory with Optimal Polynomial Projections
  Albert Gu · Tri Dao · Stefano Ermon · Atri Rudra · Christopher Ré
- 2020 Spotlight: HiPPO: Recurrent Memory with Optimal Polynomial Projections
  Albert Gu · Tri Dao · Stefano Ermon · Atri Rudra · Christopher Ré
- 2020 Poster: From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering
  Ines Chami · Albert Gu · Vaggos Chatziafratis · Christopher Ré
- 2018 Poster: Learning Compressed Transforms with Low Displacement Rank
  Anna Thomas · Albert Gu · Tri Dao · Atri Rudra · Christopher Ré
- 2017 Poster: Learning to Compose Domain-Specific Transformations for Data Augmentation
  Alexander Ratner · Henry Ehrenberg · Zeshan Hussain · Jared Dunnmon · Christopher Ré
- 2016 Poster: Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
  Bryan He · Christopher M De Sa · Ioannis Mitliagkas · Christopher Ré
- 2016 Poster: Data Programming: Creating Large Training Sets, Quickly
  Alexander Ratner · Christopher M De Sa · Sen Wu · Daniel Selsam · Christopher Ré
- 2015 : Hardware Trends for High Performance Analytics
  Christopher Ré
- 2015 : Taking it Easy
  Christopher Ré
- 2015 Spotlight: Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré
- 2015 Poster: Taming the Wild: A Unified Analysis of Hogwild-Style Algorithms
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré