Workshop: AI for Science: Progress and Promises

Interpretable Geometric Deep Learning via Learnable Randomness Injection

Siqi Miao · Yunan Luo · Mia Liu · Pan Li

Keywords: [ interpretability ] [ Geometric Deep Learning ] [ graph neural networks ]


Point cloud data is ubiquitous in scientific fields. Recently, geometric deep learning (GDL) has been widely applied to solve prediction tasks with such data. However, GDL models are often complicated and difficult to interpret, which raises concerns for scientists who deploy these models in scientific analysis and experiments. This work proposes a general mechanism named learnable randomness injection (LRI), which allows building inherently interpretable models based on general GDL backbones. Once trained, LRI-induced models can detect the points in the point cloud data that carry information indicative of the prediction label. Such indicative information may be reflected either by the existence of these points in the data or by their geometric locations. We also propose four datasets from real scientific applications in the domains of high energy physics and biochemistry to evaluate LRI. Compared with previous post-hoc interpretation methods, the points detected by LRI align substantially better and more stably with the ground-truth patterns that carry actual scientific meaning. LRI-induced models are also more robust to distribution shifts between training and test scenarios.
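The existence-based flavor of randomness injection can be illustrated with a minimal sketch: each point receives a learnable gate, a relaxed Bernoulli (binary concrete) sample is injected during training, and the learned keep-probabilities serve as per-point importance scores. This is only an assumption-laden toy, not the authors' implementation; the logits are fixed by hand here, whereas in LRI they would be produced by a trainable module and optimized jointly with the backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

def concrete_bernoulli(logits, temperature=0.5):
    """Draw a differentiable relaxed Bernoulli (binary concrete) sample.

    Injects logistic noise into the logits, so the gate is stochastic
    during training but its expectation tracks sigmoid(logits).
    """
    u = rng.uniform(1e-6, 1 - 1e-6, size=logits.shape)
    logistic_noise = np.log(u) - np.log(1.0 - u)
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

# Toy point cloud: 5 points, each with a 3-d feature vector.
features = rng.normal(size=(5, 3))

# Hypothetical per-point gate logits; in practice these would come from
# a learned network conditioned on the point cloud.
logits = np.array([3.0, -3.0, 3.0, -3.0, 3.0])

mask = concrete_bernoulli(logits)           # stochastic per-point gate in (0, 1)
gated_features = features * mask[:, None]   # randomness-injected backbone input

# Interpretation: points with high learned keep-probability are flagged
# as indicative of the prediction label.
keep_prob = 1.0 / (1.0 + np.exp(-logits))
important_points = np.where(keep_prob > 0.5)[0]
```

After training, thresholding `keep_prob` recovers the detected pattern; the geometric-location variant would instead perturb point coordinates with learnable Gaussian noise rather than gating point existence.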
