

Poster

Denoising Diffusion Path: Attribution Noise Reduction with An Auxiliary Diffusion Model

Yiming Lei · Zilong Li · Junping Zhang · Hongming Shan

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The explainability of deep neural networks (DNNs) is critical for trust and reliability in AI systems. Path-based attribution methods, such as Integrated Gradients (IG), aim to explain predictions by accumulating gradients along a path from a baseline to the target image. However, noise accumulated during this process can significantly distort the explanation. Existing methods primarily focus on finding alternative paths to bypass noise while overlooking a crucial factor: intermediate steps often deviate from the training data distribution, further amplifying noise. This work presents a novel Denoising Diffusion Path (DDPath) to tackle this challenge by harnessing the power of diffusion models for denoising. By leveraging the inherent ability of diffusion models to progressively remove noise from an image, DDPath constructs a piece-wise linear path in which each segment guarantees that samples are drawn from a Gaussian distribution centered at the target image; this also ensures that noise gradually vanishes along the path, yielding cleaner and more interpretable attributions. We further demonstrate that DDPath adheres to essential axiomatic properties for attribution methods and can be seamlessly integrated with existing methods like IG. Extensive experiments show that DDPath significantly reduces noise in the attributions, resulting in clearer explanations, and achieves better quantitative results than traditional path-based methods.
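For readers unfamiliar with path-based attribution, the sketch below illustrates the general idea described in the abstract: gradients are accumulated, IG-style, along a piece-wise linear path whose endpoints are Gaussian perturbations of the target image with a shrinking noise scale. The noise schedule, segment counts, and names such as noisy_path_attribution and sigma_max are illustrative assumptions for this sketch, not the authors' exact DDPath formulation (which uses an auxiliary diffusion model rather than a fixed schedule).

# Minimal sketch (assumed details, not the official DDPath implementation).
import torch

def noisy_path_attribution(model, x, target_class,
                           n_segments=10, n_steps_per_segment=5, sigma_max=1.0):
    """Accumulate gradients along a piece-wise linear path whose endpoints are
    Gaussian perturbations of the target image x with decreasing noise scale."""
    x = x.detach()
    # Noise scales shrink linearly to zero, so the path ends at the clean image
    # (a fixed linear schedule is an assumption made for this sketch).
    sigmas = torch.linspace(sigma_max, 0.0, n_segments + 1)
    # Path endpoints: x + sigma_k * eps, i.e. Gaussians centered at the target image.
    eps = torch.randn_like(x)
    points = [x + s * eps for s in sigmas]

    attribution = torch.zeros_like(x)
    for start, end in zip(points[:-1], points[1:]):
        # Riemann-sum approximation of the gradient integral on this segment,
        # as in Integrated Gradients.
        grad_sum = torch.zeros_like(x)
        for alpha in torch.linspace(0.0, 1.0, n_steps_per_segment):
            point = (start + alpha * (end - start)).clone().requires_grad_(True)
            score = model(point.unsqueeze(0))[0, target_class]
            grad_sum += torch.autograd.grad(score, point)[0]
        attribution += (end - start) * grad_sum / n_steps_per_segment
    return attribution

# Toy usage with a dummy classifier (illustration only).
if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
    image = torch.rand(3, 8, 8)
    attr = noisy_path_attribution(model, image, target_class=3)
    print(attr.shape)  # torch.Size([3, 8, 8])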
