Poster in Workshop: Information-Theoretic Principles in Cognitive Systems (InfoCog)

Finding Relevant Information in Saliency Related Neural Networks

Ron M. Hecht · Gershon Celniker · Ronit Bustin · Dan Levi · Ariel Telpaz · Omer Tsimhoni · Ke Liu

Fri 15 Dec 12:40 p.m. PST — 1:30 p.m. PST

Abstract:

Over the last few years, many saliency models have shifted to using Deep Learning (DL). In this context, DL models can be viewed as a double-edged sword: they boost estimation performance, but at the same time have less explanatory power than more explicit models. This drop in explanatory power is why DL models are often dubbed implicit models. Explainable AI (XAI) techniques have been formulated to address this shortfall; they work by extracting information from the network and explaining it. Here, we demonstrate the effectiveness of the Relevant Information Approach in accounting for saliency networks. We apply this approach to saliency models based on explicit algorithms when represented as neural networks. These networks are known to contain relevant information in their neurons. We estimate the relevant information of each neuron by capturing its relevant information with respect to first-layer features (intensity, red, blue) and their higher-level manipulations. We measure relevant information using Mutual Information (MI) between the quantized features and the label. These experiments were conducted on a subset of the CAT2000 dataset.
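To make the measurement concrete, below is a minimal sketch (not the authors' code) of the kind of quantity the abstract describes: a histogram-based estimate of the mutual information between a quantized feature map (e.g., an intensity or colour channel feeding the first layer) and a binary saliency label map. The function name, bin count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def mutual_information(feature_map: np.ndarray,
                       saliency_labels: np.ndarray,
                       n_bins: int = 16) -> float:
    """Histogram-based MI (in bits) between a quantized feature and labels."""
    # Quantize the continuous feature into discrete bins.
    edges = np.histogram_bin_edges(feature_map, bins=n_bins)
    quantized = np.digitize(feature_map.ravel(), edges[1:-1])
    labels = saliency_labels.ravel().astype(int)

    # Joint distribution P(feature_bin, label) from co-occurrence counts.
    joint = np.zeros((n_bins, labels.max() + 1))
    np.add.at(joint, (quantized, labels), 1.0)
    joint /= joint.sum()

    # Marginals and MI = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Example: MI between a synthetic intensity channel and random fixation labels.
rng = np.random.default_rng(0)
intensity = rng.random((64, 64))
fixations = (rng.random((64, 64)) < 0.1).astype(int)
print(mutual_information(intensity, fixations))
```

The same estimator can in principle be applied to the activations of individual neurons rather than raw input channels, which is how the per-neuron relevant information described in the abstract would be scored.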
