An important component in deploying machine learning (ML) in safety-critical applications is having a reliable measure of confidence in the ML model's predictions. For a classifier $f$ producing a probability vector $f(x)$ over the candidate classes, the confidence is typically taken to be $\max_i f(x)_i$. This approach is potentially limited, as it disregards the rest of the probability vector. In this work, we derive several confidence measures that depend on information beyond the maximum score, such as margin-based and entropy-based measures, and empirically evaluate their usefulness, focusing on NLP tasks with distribution shifts and Transformer-based models. We show that when models are evaluated on out-of-distribution data ``out of the box'', using only the maximum score to inform the confidence measure is highly suboptimal. In the post-processing regime (where the scores of $f$ can be improved using additional in-distribution held-out data), this remains true, albeit to a lesser extent. Overall, our results suggest that entropy-based confidence is a surprisingly useful measure.
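The confidence measures named in the abstract can all be written directly in terms of the probability vector $f(x)$. The sketch below is illustrative only and assumes NumPy; the function names and toy probabilities are ours, not the paper's code. It contrasts the standard maximum-score confidence, which uses a single coordinate, with a margin-based measure (top two coordinates) and an entropy-based measure (the full vector).

```python
import numpy as np

def max_score_confidence(probs: np.ndarray) -> float:
    """Standard confidence: the maximum class probability, max_i f(x)_i."""
    return float(np.max(probs))

def margin_confidence(probs: np.ndarray) -> float:
    """Margin-based confidence: gap between the top two class probabilities."""
    top_two = np.sort(probs)[-2:]          # ascending order: [second-best, best]
    return float(top_two[1] - top_two[0])

def entropy_confidence(probs: np.ndarray) -> float:
    """Entropy-based confidence: negative Shannon entropy of the full vector,
    so higher values indicate a more peaked (more confident) prediction."""
    eps = 1e-12                             # guard against log(0)
    return float(np.sum(probs * np.log(probs + eps)))

# Toy example: a prediction whose top score hides a close runner-up.
probs = np.array([0.45, 0.40, 0.15])
print(max_score_confidence(probs))  # 0.45
print(margin_confidence(probs))     # 0.05 -- the margin exposes the ambiguity
print(entropy_confidence(probs))    # approx. -1.01
```

In this example the maximum score alone looks moderately confident, while the margin and entropy measures, which see beyond the top coordinate, flag the prediction as ambiguous.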
Author Information
Gal Yona (Weizmann Institute of Science)
Amir Feder (Columbia University)
Amir Feder is a Postdoctoral Research Scientist in the Data Science Institute, working with Professor David Blei on causal inference and natural language processing. His research seeks to develop methods that integrate causality into natural language processing, and use them to build linguistically-informed algorithms for predicting and understanding human behavior. Through the paradigm of causal machine learning, Amir aims to build bridges between machine learning and the social sciences. Before joining Columbia, Amir received his PhD from the Technion, where he was advised by Roi Reichart and worked closely with Uri Shalit. In a previous (academic) life, Amir was an economics, statistics and history student at Tel Aviv University, the Hebrew University of Jerusalem and Northwestern University. Amir was the organizer of the First Workshop on Causal Inference and NLP (CI+NLP) at EMNLP 2021.
Itay Laish
More from the Same Authors
- 2021: Revisiting Sanity Checks for Saliency Maps
  Gal Yona
- 2022: An Invariant Learning Characterization of Controlled Text Generation
  Claudia Shi · Carolina Zheng · Keyon Vafa · Amir Feder · David Blei
- 2022: Malign Overfitting: Interpolation and Invariance are Fundamentally at Odds
  Yoav Wald · Gal Yona · Uri Shalit · Yair Carmon
- 2022 Poster: CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
  Eldar D Abraham · Karel D'Oosterlinck · Amir Feder · Yair Gat · Atticus Geiger · Christopher Potts · Roi Reichart · Zhengxuan Wu
- 2022 Poster: In the Eye of the Beholder: Robust Prediction with Causal User Modeling
  Amir Feder · Guy Horowitz · Yoav Wald · Roi Reichart · Nir Rosenfeld
- 2021: [S13] Revisiting Sanity Checks for Saliency Maps
  Gal Yona
- 2021 Poster: On Calibration and Out-of-Domain Generalization
  Yoav Wald · Amir Feder · Daniel Greenfeld · Uri Shalit
- 2019: Coffee Break and Poster Session
  Rameswar Panda · Prasanna Sattigeri · Kush Varshney · Karthikeyan Natesan Ramamurthy · Harvineet Singh · Vishwali Mhasawade · Shalmali Joshi · Laleh Seyyed-Kalantari · Matthew McDermott · Gal Yona · James Atwood · Hansa Srinivasan · Yonatan Halpern · D. Sculley · Behrouz Babaki · Margarida Carvalho · Josie Williams · Narges Razavian · Haoran Zhang · Amy Lu · Irene Y Chen · Xiaojie Mao · Angela Zhou · Nathan Kallus