

Poster in Workshop: New Frontiers in Graph Learning

Condensing Graphs via One-Step Gradient Matching

Wei Jin · Xianfeng Tang · Haoming Jiang · Zheng Li · Danqing Zhang · Jiliang Tang · Bing Yin


Abstract: As training deep learning models on large datasets takes substantial time and resources, it is desirable to construct a small synthetic dataset on which deep learning models can be trained sufficiently well. Recent works have explored solutions for condensing image datasets through complex bi-level optimization. For instance, dataset condensation (DC) matches network gradients w.r.t. large-real data and small-synthetic data, where the network weights are optimized for multiple steps at each outer iteration. However, existing approaches have inherent limitations: (1) they are not directly applicable to graphs, where the data is discrete; and (2) the condensation process is computationally expensive due to the nested optimization involved. To bridge the gap, we investigate efficient dataset condensation tailored for graph datasets, where we model the discrete graph structure as a probabilistic model. We further propose a one-step gradient matching scheme, which performs gradient matching for only a single step without training the network weights. Our theoretical analysis shows that this strategy can generate synthetic graphs that lead to lower classification loss on real graphs. Extensive experiments on various graph datasets demonstrate the effectiveness and efficiency of the proposed method. In particular, we are able to reduce the dataset size by $90$\% while approximating up to $98$\% of the original performance, and our method is significantly faster than multi-step gradient matching (e.g., $15\times$ in CIFAR10 for synthesizing $500$ graphs).
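To make the idea in the abstract concrete, the sketch below shows what one-step gradient matching for graph condensation could look like in PyTorch: the synthetic adjacency is parameterized as Bernoulli logits (a probabilistic model of the discrete structure), and at each outer iteration the gradients of a freshly initialized GNN on the real and synthetic graphs are matched without ever training the network weights. This is a minimal illustration under assumed details; all names (`SimpleGCN`, `gradient_distance`), the cosine matching loss, and the hyperparameters are assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of one-step gradient matching for graph condensation.
# Assumptions: dense-adjacency two-layer GCN, cosine gradient-matching loss,
# random stand-in "real" data. Illustrative only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCN(nn.Module):
    """Two-layer GCN operating on a dense (possibly soft) adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, adj):
        # Add self-loops and apply symmetric normalization.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = adj.sum(1).clamp(min=1e-8).pow(-0.5)
        adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)


def gradient_distance(grads_real, grads_syn):
    """Sum of (1 - cosine similarity) over matched gradient tensors."""
    dist = 0.0
    for gr, gs in zip(grads_real, grads_syn):
        dist = dist + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return dist


# ---- stand-in real graph and learnable synthetic graph ---------------------
n_real, n_syn, in_dim, n_classes = 200, 20, 16, 4
x_real = torch.randn(n_real, in_dim)
adj_real = (torch.rand(n_real, n_real) > 0.9).float()
adj_real = ((adj_real + adj_real.t()) > 0).float()      # symmetrize
y_real = torch.randint(0, n_classes, (n_real,))

x_syn = nn.Parameter(torch.randn(n_syn, in_dim))          # synthetic features
adj_logits = nn.Parameter(torch.zeros(n_syn, n_syn))       # Bernoulli logits
y_syn = torch.arange(n_syn) % n_classes                    # fixed balanced labels

opt = torch.optim.Adam([x_syn, adj_logits], lr=0.01)

for it in range(200):
    # Freshly initialized network; its weights are never trained.
    model = SimpleGCN(in_dim, 32, n_classes)
    params = list(model.parameters())

    # Gradients of the classification loss on the real graph.
    loss_real = F.cross_entropy(model(x_real, adj_real), y_real)
    grads_real = torch.autograd.grad(loss_real, params)

    # Gradients on the synthetic graph; adjacency is a soft Bernoulli mean.
    adj_syn = torch.sigmoid(adj_logits)
    adj_syn = (adj_syn + adj_syn.t()) / 2
    loss_syn = F.cross_entropy(model(x_syn, adj_syn), y_syn)
    grads_syn = torch.autograd.grad(loss_syn, params, create_graph=True)

    # One-step matching: align gradients at initialization, then update
    # only the synthetic graph parameters.
    loss = gradient_distance([g.detach() for g in grads_real], grads_syn)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the network weights are never updated, each outer iteration reduces to a single forward/backward pass on the real and synthetic graphs, which is where the reported speed-up over multi-step gradient matching comes from.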
