Survival analysis models the distribution of the time until an event of interest, such as discharge from the hospital or admission to the ICU. A model is called well-calibrated when its predicted number of events within any time interval is close to the observed number. A survival model's calibration can be measured using, for instance, distributional calibration (D-CALIBRATION) [Haider et al., 2020], which computes the squared difference between the observed and predicted number of events within different time intervals. Classically, calibration is addressed in post-training analysis. We develop explicit calibration (X-CAL), which turns D-CALIBRATION into a differentiable objective that can be used in survival modeling alongside maximum likelihood estimation and other objectives. X-CAL allows us to directly optimize calibration and to strike a desired trade-off between predictive power and calibration. In our experiments, we fit a variety of shallow and deep models on simulated data, on a survival dataset based on MNIST, on length-of-stay prediction using MIMIC-III data, and on brain cancer data from The Cancer Genome Atlas. We show that the models we study can be miscalibrated, and we give experimental evidence on these datasets that X-CAL improves D-CALIBRATION without a large decrease in concordance or likelihood.
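To make the idea concrete: D-CALIBRATION evaluates the predicted CDF at each observed event time; for a calibrated model these values are uniform on [0, 1], so each of B equal bins should hold 1/B of the mass. The snippet below is a minimal PyTorch sketch of a differentiable version of this statistic, not the authors' released code: the hard bin-membership indicator is softened with sigmoids of temperature gamma so the penalty can be optimized by gradient descent. The function name, the sigmoid-based soft indicator, and gamma are illustrative assumptions, and the sketch covers only uncensored events, whereas the paper additionally spreads each censored point's contribution across bins.

    import torch

    def soft_d_calibration(cdf_at_event, n_bins=10, gamma=100.0):
        """Differentiable D-Calibration penalty (illustrative sketch).

        cdf_at_event: tensor of shape (n,) holding the predicted CDF
        F_i(t_i) at each observed event time. For a well-calibrated
        model these values are uniform on [0, 1], so each of the
        n_bins bins should contain 1/n_bins of the mass.
        """
        edges = torch.linspace(0.0, 1.0, n_bins + 1)
        lower, upper = edges[:-1], edges[1:]          # bin edges, shape (B,)
        u = cdf_at_event.unsqueeze(1)                 # shape (n, 1)
        # Soft indicator of u in [lower, upper): product of two sigmoids.
        # As gamma grows, this approaches the hard bin indicator.
        membership = torch.sigmoid(gamma * (u - lower)) * torch.sigmoid(gamma * (upper - u))
        bin_mass = membership.mean(dim=0)             # soft fraction per bin, shape (B,)
        return ((bin_mass - 1.0 / n_bins) ** 2).sum()

In use, such a penalty would be added to the usual training objective with a trade-off weight, e.g. loss = nll + lam * soft_d_calibration(model_cdf_at_event), where lam controls the balance between likelihood and calibration described above.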
Author Information
Mark Goldstein (New York University)
Xintian Han (New York University)
Aahlad Puli (New York University)
Adler Perotte (Columbia University)
Rajesh Ranganath (New York University)
More from the Same Authors
- 2021 Spotlight: Offline RL Without Off-Policy Evaluation
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2021: Learning Invariant Representations with Missing Data
  Mark Goldstein · Adriel Saporta · Aahlad Puli · Rajesh Ranganath · Andrew Miller
- 2021: Learning to Accelerate MR Screenings
  Raghav Singhal · Mukund Sudarshan · Angela Tong · Daniel Sodickson · Rajesh Ranganath
- 2021: Individual treatment effect estimation in the presence of unobserved confounding based on a fixed relative treatment effect
  Wouter van Amsterdam · Rajesh Ranganath
- 2021: Quantile Filtered Imitation Learning
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2021 Poster: Inverse-Weighted Survival Games
  Xintian Han · Mark Goldstein · Aahlad Puli · Thomas Wies · Adler Perotte · Rajesh Ranganath
- 2021 Poster: Offline RL Without Off-Policy Evaluation
  David Brandfonbrener · Will Whitney · Rajesh Ranganath · Joan Bruna
- 2020 Poster: Deep Direct Likelihood Knockoffs
  Mukund Sudarshan · Wesley Tansey · Rajesh Ranganath
- 2020 Poster: General Control Functions for Causal Effect Estimation from IVs
  Aahlad Puli · Rajesh Ranganath
- 2020 Poster: Causal Estimation with Functional Confounders
  Aahlad Puli · Adler Perotte · Rajesh Ranganath
- 2019: Coffee break, posters, and 1-on-1 discussions
  Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019 Poster: Energy-Inspired Models: Learning with Sampler-Induced Distributions
  John Lawson · George Tucker · Bo Dai · Rajesh Ranganath
- 2018 Poster: Removing Hidden Confounding by Experimental Grounding
  Nathan Kallus · Aahlad Puli · Uri Shalit
- 2018 Spotlight: Removing Hidden Confounding by Experimental Grounding
  Nathan Kallus · Aahlad Puli · Uri Shalit