Out-of-distribution (OOD) detection has received much attention lately due to its practical importance for the safe deployment of neural networks. One of the primary challenges is that models often produce highly confident predictions on OOD data, which undermines the driving principle of OOD detection that the model should be confident only about in-distribution samples. In this work, we propose ReAct—a simple and effective technique for reducing model overconfidence on OOD data. Our method is motivated by a novel analysis of internal activations of neural networks, which exhibit highly distinctive signature patterns on OOD distributions. Our method generalizes effectively to different network architectures and different OOD detection scores. We empirically demonstrate that ReAct achieves competitive detection performance on a comprehensive suite of benchmark datasets, and we provide theoretical analysis of our method's efficacy. On the ImageNet benchmark, ReAct reduces the false positive rate (FPR95) by 25.05% compared to the previous best method.
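The idea behind ReAct can be sketched in a few lines: cap (rectify) the penultimate-layer activations at a threshold estimated from in-distribution data, then recompute the logits and score them (e.g., with an energy score). The snippet below is a minimal illustration with numpy, not the paper's implementation; the weights, features, and the 90th-percentile threshold choice are hypothetical placeholders.

```python
import numpy as np

def react_energy_score(features, W, b, c):
    """Energy-based OOD score with ReAct activation truncation.

    features: (d,) penultimate-layer activations (post-ReLU)
    W, b:     weights (k, d) and bias (k,) of the final linear layer
    c:        truncation threshold, e.g. a high percentile of
              activations observed on in-distribution data
    Returns a scalar; higher values suggest in-distribution.
    """
    clipped = np.minimum(features, c)   # ReAct: cap each activation at c
    logits = W @ clipped + b            # recompute logits on rectified features
    # numerically stable log-sum-exp of the logits (negative free energy)
    m = logits.max()
    return m + np.log(np.sum(np.exp(logits - m)))

# Hypothetical usage with random weights and features.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(10, 512)), np.zeros(10)
feats = rng.normal(size=512) ** 2       # nonnegative, ReLU-like features
c = np.percentile(feats, 90)            # threshold from activation statistics
score = react_energy_score(feats, W, b, c)
```

Because only the final-layer input is modified, the same truncation can be paired with other scoring functions (softmax confidence, Mahalanobis, etc.), which is what allows ReAct to generalize across detection scores.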
Author Information
Yiyou Sun (University of Wisconsin, Madison)
Chuan Guo (Facebook AI Research)
Yixuan Li (University of Wisconsin-Madison)
More from the Same Authors
- 2022 Poster: Delving into Out-of-Distribution Detection with Vision-Language Representations
  Yifei Ming · Ziyang Cai · Jiuxiang Gu · Yiyou Sun · Wei Li · Yixuan Li
- 2022 Poster: OpenOOD: Benchmarking Generalized Out-of-Distribution Detection
  Jingkang Yang · Pengyun Wang · Dejian Zou · Zitang Zhou · Kunyuan Ding · Wenxuan Peng · Haoqi Wang · Guangyao Chen · Bo Li · Yiyou Sun · Xuefeng Du · Kaiyang Zhou · Wayne Zhang · Dan Hendrycks · Yixuan Li · Ziwei Liu
- 2021 Poster: Online Adaptation to Label Distribution Shift
  Ruihan Wu · Chuan Guo · Yi Su · Kilian Weinberger
- 2021 Poster: On the Importance of Gradients for Detecting Distributional Shifts in the Wild
  Rui Huang · Andrew Geng · Yixuan Li
- 2021 Poster: Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems
  Ruihan Wu · Chuan Guo · Awni Hannun · Laurens van der Maaten
- 2021 Poster: Can multi-label classification networks know what they don't know?
  Haoran Wang · Weitang Liu · Alex Bocchieri · Yixuan Li
- 2021 Poster: BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining
  Weizhe Hua · Yichi Zhang · Chuan Guo · Zhiru Zhang · G. Edward Suh
- 2020 Poster: Energy-based Out-of-distribution Detection
  Weitang Liu · Xiaoyun Wang · John Owens · Yixuan Li
- 2019 Poster: Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces
  Chuan Guo · Ali Mousavi · Xiang Wu · Daniel Holtmann-Rice · Satyen Kale · Sashank Reddi · Sanjiv Kumar
- 2019 Poster: A New Defense Against Adversarial Images: Turning a Weakness into a Strength
  Shengyuan Hu · Tao Yu · Chuan Guo · Wei-Lun Chao · Kilian Weinberger
- 2016 Poster: Supervised Word Mover's Distance
  Gao Huang · Chuan Guo · Matt J Kusner · Yu Sun · Fei Sha · Kilian Weinberger
- 2016 Oral: Supervised Word Mover's Distance
  Gao Huang · Chuan Guo · Matt J Kusner · Yu Sun · Fei Sha · Kilian Weinberger