Poster

Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections

Raanan Rohekar · Yaniv Gurwicz · Shami Nisimov · Gal Novik

East Exhibition Hall B + C #45

Keywords: [ Algorithms -> Boosting and Ensemble Methods ] [ Model Selection and Structure Learning ] [ Uncertainty Estimation ]


Abstract:

Modeling uncertainty in deep neural networks, despite recent important advances, is still an open problem. Bayesian neural networks are a powerful solution, where the prior over network weights is a design choice, often a normal distribution or another distribution encouraging sparsity. However, this prior is agnostic to the generative process of the input data, which might lead to unwarranted generalization on out-of-distribution test data. We suggest the presence of a confounder for the relation between the input data and the discriminative function, given the target label. We propose an approach for modeling this confounder by sharing neural connectivity patterns between the generative and discriminative networks. This approach leads to a new deep architecture in which networks are sampled from the posterior of local causal structures and coupled into a compact hierarchy. We demonstrate that sampling networks from this hierarchy, proportionally to their posterior, is efficient and enables estimating various types of uncertainties. Empirical evaluations of our method demonstrate significant improvement over state-of-the-art calibration and out-of-distribution detection methods.
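The core idea of "sampling networks proportionally to their posterior" and averaging their predictions can be illustrated with a minimal Monte Carlo sketch. This is not the authors' implementation; it only assumes we already have per-model class probabilities (`model_probs`) and a posterior weight per model (`posterior`), both hypothetical names, and shows how a predictive distribution and its entropy (a common total-uncertainty measure) would be estimated.

```python
import numpy as np

def sample_models(posterior, n_samples, rng):
    """Draw model indices proportionally to their posterior probability."""
    posterior = np.asarray(posterior, dtype=float)
    posterior = posterior / posterior.sum()  # normalize in case weights are unnormalized
    return rng.choice(len(posterior), size=n_samples, p=posterior)

def predictive_uncertainty(model_probs, posterior, n_samples=1000, seed=0):
    """Monte Carlo estimate of the predictive distribution and its entropy.

    model_probs: array of shape (n_models, n_classes) with each model's
                 class probabilities for a single input (assumed given).
    posterior:   per-model posterior weights.
    """
    rng = np.random.default_rng(seed)
    idx = sample_models(posterior, n_samples, rng)
    # Average class probabilities over the sampled models.
    p_mean = np.asarray(model_probs, dtype=float)[idx].mean(axis=0)
    # Predictive entropy: higher means more total uncertainty.
    entropy = -np.sum(p_mean * np.log(p_mean + 1e-12))
    return p_mean, entropy

# Toy usage: two models that disagree -> noticeable predictive entropy.
probs = [[0.9, 0.1], [0.2, 0.8]]
p_mean, h = predictive_uncertainty(probs, posterior=[0.5, 0.5])
```

In the paper's setting the sampled objects are full networks drawn from a hierarchy of local causal structures rather than a fixed ensemble, but the posterior-weighted averaging step is the same in spirit.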
