Poster in Workshop: New Frontiers in Graph Learning (GLFrontiers)

Shedding Light on Random Dropping and Oversmoothing

Han Xuanyuan · Tianxiang Zhao · Dongsheng Luo

Keywords: [ oversmoothing ]


Abstract:

Graph Neural Networks (GNNs) are widely used in graph representation learning. Random dropping approaches, notably DropEdge and DropMessage, are claimed to alleviate the key issues of overfitting and oversmoothing by randomly removing elements of the graph representation. However, their effectiveness remains largely unverified. In this work, we find empirically that they have only a limited effect in reducing oversmoothing, contrary to what is typically assumed in the literature. Moreover, these approaches are non-parametric, which motivates us to ask whether learned dropping can better suppress the propagation of redundant or noisy edges. We propose a new information-theoretic approach that learns to drop parts of the data exchanged by nodes during message passing by optimizing an information bottleneck. Our approach outperforms previous dropping methods in reducing oversmoothing and shows promising performance for deep GNNs.
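To make the contrast concrete, below is a minimal PyTorch sketch. The `drop_edge` function mirrors the DropEdge idea from the abstract (each edge removed independently at random during training). The `LearnedEdgeDrop` module is purely illustrative of what a parameterized, information-bottleneck-regularized alternative could look like: its scorer, sampling scheme, and `ib_penalty` term are assumptions for exposition, not the authors' actual method.

```python
import torch
import torch.nn as nn


def drop_edge(edge_index: torch.Tensor, p: float = 0.2,
              training: bool = True) -> torch.Tensor:
    # DropEdge-style random dropping: each edge is removed independently
    # with probability p during training; at inference the graph is intact.
    if not training or p <= 0.0:
        return edge_index
    keep_mask = torch.rand(edge_index.size(1), device=edge_index.device) >= p
    return edge_index[:, keep_mask]


class LearnedEdgeDrop(nn.Module):
    # Hypothetical learned dropping (NOT the paper's method): a scorer maps
    # each edge's endpoint features to a keep probability, and the mean keep
    # probability serves as a crude IB-style compression penalty that
    # discourages transmitting redundant or noisy edges.
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(2 * dim, 1)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
        src, dst = edge_index[0], edge_index[1]
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        keep_prob = torch.sigmoid(logits)
        # A hard Bernoulli sample is non-differentiable; real training would
        # use a relaxation (e.g. Gumbel-Softmax) or a straight-through
        # estimator in its place.
        mask = torch.bernoulli(keep_prob).bool()
        ib_penalty = keep_prob.mean()
        return edge_index[:, mask], ib_penalty


# Usage sketch on a toy graph:
x = torch.randn(4, 8)                      # 4 nodes, 8-dim features
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])  # directed 4-cycle
sparser = drop_edge(edge_index, p=0.5, training=True)
learned = LearnedEdgeDrop(dim=8)
kept_edges, penalty = learned(x, edge_index)
```

The key design difference the sketch highlights: random dropping treats every edge identically, whereas a learned scheme can assign per-edge keep probabilities conditioned on node features, with the compression penalty traded off against the task loss.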
