Poster

Learning to Shape In-distribution Feature Space for Out-of-distribution Detection

Yonggang Zhang · Bo Peng · Jie Lu · Zhen Fang · Yiu-ming Cheung

West Ballroom A-D #6810
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Out-of-distribution (OOD) detection is critical for deploying machine learning models in the open world. To design scoring functions that distinguish OOD data from in-distribution (ID) data using a pre-trained discriminative model, existing methods tend to impose strong distributional assumptions, either explicitly or implicitly, because the learned feature space is not known in advance. This dilemma motivates a fundamental yet under-explored question: "Is it possible to deterministically model the feature distribution while pre-training a discriminative model?" This paper gives an affirmative answer by presenting a Distributional Representation Learning (DRL) framework for OOD detection. In particular, DRL explicitly enforces the underlying feature space to conform to a pre-defined mixture distribution, together with an online approximation of normalization constants that enables end-to-end training. Furthermore, we formulate DRL as a provably convergent Expectation-Maximization algorithm to avoid trivial solutions, and rearrange the sequential sampling to ensure training consistency. Extensive evaluations across mainstream OOD detection benchmarks demonstrate the superiority of DRL over advanced counterparts.
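To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of what "enforcing the feature space to conform to a pre-defined mixture distribution" could look like: a Gaussian mixture with one learnable component per ID class, trained with an EM-style objective, and an OOD score given by the mixture log-likelihood. All names (MixtureHead, em_loss, ood_score) are illustrative; the paper's actual DRL formulation, including the online normalization-constant approximation and the rearranged sequential sampling, is not reproduced here.

```python
# Hypothetical sketch: shape encoder features toward a pre-defined
# Gaussian mixture (one component per ID class) and score OOD inputs
# by mixture log-likelihood. Not the paper's actual DRL implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(()))  # shared isotropic scale

    def log_prob(self, z: torch.Tensor) -> torch.Tensor:
        # Per-component log N(z | mu_k, sigma^2 I), up to an additive constant.
        d = z.size(1)
        sq = torch.cdist(z, self.means) ** 2            # (B, K) squared distances
        return -0.5 * sq / self.log_sigma.exp() ** 2 - d * self.log_sigma

    def em_loss(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        logp = self.log_prob(z)
        # E-step: fixed (detached) responsibilities; here a blend of the
        # ground-truth label and the current soft assignment.
        with torch.no_grad():
            resp = F.softmax(logp, dim=1)
            resp = 0.9 * F.one_hot(y, logp.size(1)).float() + 0.1 * resp
        # M-step: maximize the expected complete-data log-likelihood
        # (in practice combined with the discriminative training objective).
        return -(resp * logp).sum(dim=1).mean()

def ood_score(head: MixtureHead, z: torch.Tensor) -> torch.Tensor:
    # Higher mixture log-likelihood => more in-distribution.
    return torch.logsumexp(head.log_prob(z), dim=1)
```

At test time, inputs whose features receive a low mixture log-likelihood would be flagged as OOD; because the target mixture is fixed in advance, no post-hoc distributional assumption about the learned feature space is needed.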
