In machine learning, we traditionally evaluate the performance of a single model, averaged over a collection of test inputs. In this work, we propose a new approach: we measure the performance of a collection of models when evaluated at a single input point. Specifically, we study a point's *profile*: the relationship between models' average performance on the test distribution and their pointwise performance on this individual point. We find that profiles can yield new insights into the structure of both models and data, in- and out-of-distribution. For example, we empirically show that real data distributions consist of points with qualitatively different profiles. On the one hand, there are "compatible" points, with a strong correlation between pointwise and average performance. On the other hand, there are points with weak or even *negative* correlation: cases where improving overall model accuracy actually *hurts* performance on these inputs. As an application, we use profiles to construct a dataset we call CIFAR-10-NEG: a subset of CINIC-10 such that, for standard models, accuracy on CIFAR-10-NEG is *negatively correlated* with CIFAR-10 accuracy. This illustrates, for the first time, an OOD dataset that completely inverts "accuracy-on-the-line" (Miller et al., 2021).
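
To make the notion of a profile concrete, below is a minimal sketch (not the authors' code) of one way to estimate profiles from a collection of pretrained models: for each test point, correlate the models' overall test accuracy with their correctness on that point. The matrix `correct` and the helper `pointwise_profiles` are hypothetical names introduced only for illustration.

```python
# Minimal sketch of computing per-point "profiles": for each test point, the
# Pearson correlation, across a collection of models, between a model's average
# test accuracy and its correctness on that point.
# Assumes `correct` is a 0/1 matrix of shape (num_models, num_points), where
# correct[m, i] = 1 iff model m classifies test point i correctly.
import numpy as np

def pointwise_profiles(correct: np.ndarray) -> np.ndarray:
    """Return, for each test point, the correlation between models'
    average accuracy and their correctness on that point."""
    avg_acc = correct.mean(axis=1)                     # average accuracy per model
    avg_centered = avg_acc - avg_acc.mean()
    pt_centered = correct - correct.mean(axis=0)       # center each point's correctness
    cov = pt_centered.T @ avg_centered / len(avg_acc)  # covariance with average accuracy
    denom = pt_centered.std(axis=0) * avg_centered.std()
    with np.errstate(divide="ignore", invalid="ignore"):
        profiles = np.where(denom > 0, cov / denom, 0.0)
    return profiles  # negative values flag "negatively correlated" points

# Example usage with random data standing in for a real model collection:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    correct = (rng.random((50, 1000)) < 0.7).astype(float)
    profiles = pointwise_profiles(correct)
    print("fraction of negatively correlated points:", (profiles < 0).mean())
```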
Author Information
Gal Kaplun (Harvard University)
Nikhil Ghosh (UC Berkeley)
Saurabh Garg (Carnegie Mellon University)
Boaz Barak (Harvard University)
Preetum Nakkiran (Harvard)
More from the Same Authors
- 2021 Spotlight: Mixture Proportion Estimation and PU Learning: A Modern Approach »
  Saurabh Garg · Yifan Wu · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2021: Leveraging Unlabeled Data to Predict Out-of-Distribution Performance »
  Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton · Behnam Neyshabur · Hanie Sedghi
- 2022: Downstream Datasets Make Surprisingly Good Pretraining Corpora »
  Kundan Krishna · Saurabh Garg · Jeffrey Bigham · Zachary Lipton
- 2022: APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations »
  Elan Rosenfeld · Preetum Nakkiran · Hadi Pouransari · Oncel Tuzel · Fartash Faghri
- 2022: Gradient dynamics of single-neuron autoencoders on orthogonal data »
  Nikhil Ghosh · Spencer Frei · Wooseok Ha · Bin Yu
- 2022: Disentangling the Mechanisms Behind Implicit Regularization in SGD »
  Zachary Novack · Simran Kaur · Tanya Marwah · Saurabh Garg · Zachary Lipton
- 2022: RLSBench: A Large-Scale Empirical Study of Domain Adaptation Under Relaxed Label Shift »
  Saurabh Garg · Nick Erickson · James Sharpnack · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2022: Poster Session 2 »
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Benign, Tempered, or Catastrophic: Toward a Refined Taxonomy of Overfitting »
  Neil Mallinar · James Simon · Amirhesam Abedsoltan · Parthe Pandit · Misha Belkin · Preetum Nakkiran
- 2022 Poster: Characterizing Datapoints via Second-Split Forgetting »
  Pratyush Maini · Saurabh Garg · Zachary Lipton · J. Zico Kolter
- 2022 Poster: Unsupervised Learning under Latent Label Shift »
  Manley Roberts · Pranav Mani · Saurabh Garg · Zachary Lipton
- 2022 Poster: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit »
  Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang
- 2022 Poster: Domain Adaptation under Open Set Label Shift »
  Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton
- 2022 Poster: What You See is What You Get: Principled Deep Learning via Distributional Generalization »
  Bogdan Kulynych · Yao-Yuan Yang · Yaodong Yu · Jarosław Błasiok · Preetum Nakkiran
- 2022 Poster: Knowledge Distillation: Bad Models Can Be Good Role Models »
  Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz
- 2021 Poster: Mixture Proportion Estimation and PU Learning: A Modern Approach »
  Saurabh Garg · Yifan Wu · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2021 Poster: Revisiting Model Stitching to Compare Neural Representations »
  Yamini Bansal · Preetum Nakkiran · Boaz Barak
- 2020: Contributed talks in Session 3 (Zoom) »
  Mark Schmidt · Zhan Gao · Wenjie Li · Preetum Nakkiran · Denny Wu · Chengrun Yang
- 2020: Contributed Video: Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems, Preetum Nakkiran »
  Preetum Nakkiran
- 2020: Poster Session 2 (gather.town) »
  Sharan Vaswani · Nicolas Loizou · Wenjie Li · Preetum Nakkiran · Zhan Gao · Sina Baghal · Jingfeng Wu · Roozbeh Yousefzadeh · Jinyi Wang · Jing Wang · Cong Xie · Anastasia Borovykh · Stanislaw Jastrzebski · Soham Dan · Yiliang Zhang · Mark Tuddenham · Sarath Pattathil · Ievgen Redko · Jeremy Cohen · Yasaman Esfandiari · Zhanhong Jiang · Mostafa ElAraby · Chulhee Yun · Michael Psenka · Robert Gower · Xiaoyu Wang
- 2020 Poster: A Unified View of Label Shift Estimation »
  Saurabh Garg · Yifan Wu · Sivaraman Balakrishnan · Zachary Lipton
- 2019 Poster: SGD on Neural Networks Learns Functions of Increasing Complexity »
  Dimitris Kalimeris · Gal Kaplun · Preetum Nakkiran · Benjamin Edelman · Tristan Yang · Boaz Barak · Haofeng Zhang
- 2019 Spotlight: SGD on Neural Networks Learns Functions of Increasing Complexity »
  Dimitris Kalimeris · Gal Kaplun · Preetum Nakkiran · Benjamin Edelman · Tristan Yang · Boaz Barak · Haofeng Zhang
- 2019 Poster: (Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs »
  Boaz Barak · Chi-Ning Chou · Zhixian Lei · Tselil Schramm · Yueqi Sheng
- 2019 Poster: Landmark Ordinal Embedding »
  Nikhil Ghosh · Yuxin Chen · Yisong Yue