

Poster

Neural Interaction Transparency (NIT): Disentangling Learned Interactions for Improved Interpretability

Michael Tsang · Hanpeng Liu · Sanjay Purushotham · Pavankumar Murali · Yan Liu

Room 210 #83

Keywords: [ Visualization or Exposition Techniques for Deep Networks ]


Abstract: Neural networks are known to model statistical interactions, but they entangle these interactions at intermediate hidden layers through shared representation learning. We propose a framework, Neural Interaction Transparency (NIT), that disentangles the shared learning across different interactions to recover their intrinsic, lower-order, and interpretable structure. It does so through a novel regularizer that directly penalizes interaction order. We show that disentangling interactions reduces a feedforward neural network to a generalized additive model with interactions, which can yield transparent models that perform comparably to state-of-the-art models. NIT is also flexible and efficient: it can learn generalized additive models with maximum $K$-order interactions by training only $O(1)$ models.
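The regularizer can be pictured as a soft constraint on how many input features each additive block is allowed to use. Below is a minimal PyTorch sketch of that general idea, not the paper's exact formulation: `BlockGAM`, `gate_logits`, and `order_penalty` are hypothetical names, and the hinge penalty on each block's soft feature count is an assumed stand-in for NIT's interaction-order regularizer.

```python
import torch
import torch.nn as nn

class BlockGAM(nn.Module):
    """Toy block-structured additive network. Each block sees the input
    through a learnable soft gate vector; gates near zero switch features
    off, so the (soft) count of active gates approximates the block's
    interaction order. Hypothetical sketch, not NIT's exact architecture."""

    def __init__(self, n_features, n_blocks, hidden=16):
        super().__init__()
        # One gate vector per block, initialized to 0.5 after the sigmoid.
        self.gate_logits = nn.Parameter(torch.zeros(n_blocks, n_features))
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Linear(n_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_blocks)
        )

    def gates(self):
        return torch.sigmoid(self.gate_logits)  # soft 0/1 feature masks

    def forward(self, x):
        g = self.gates()
        # Additive model: the prediction is a sum of per-block outputs,
        # each computed on its own gated subset of the features.
        return sum(blk(x * g[b]) for b, blk in enumerate(self.blocks))


def order_penalty(model, k_max):
    """Assumed stand-in for an interaction-order regularizer: a hinge on
    each block's soft order, so any block using more than k_max features
    is penalized, pushing the model toward a K-order additive form."""
    soft_order = model.gates().sum(dim=1)        # soft feature count per block
    return torch.relu(soft_order - k_max).sum()


# Usage: one training step with the order penalty added to the loss.
model = BlockGAM(n_features=10, n_blocks=5)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y) + 0.1 * order_penalty(model, k_max=2)
loss.backward()
```

A single training run suffices here because the penalty shapes all blocks at once, which mirrors the abstract's point that NIT learns a maximum $K$-order additive structure by training only $O(1)$ models.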
