Poster
GCN meets GPU: Decoupling “When to Sample” from “How to Sample”
Morteza Ramezani · Weilin Cong · Mehrdad Mahdavi · Anand Sivasubramaniam · Mahmut Kandemir

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #750

Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs). While effective in alleviating the neighborhood explosion, these methods incur significant computational overheads from preprocessing and loading new samples on heterogeneous systems, due to bandwidth and memory bottlenecks, which substantially degrades sampling performance. By decoupling the frequency of sampling from the sampling strategy, we propose LazyGCN, a general yet effective framework that can be integrated with any sampling strategy to substantially reduce training time. The basic idea behind LazyGCN is to perform sampling periodically and effectively recycle the sampled nodes to mitigate the data-preparation overhead. We theoretically analyze the proposed algorithm and show that, under a mild condition on the recycling size, reducing the variance of the inner layers allows us to obtain the same convergence rate as the underlying sampling method. We also give corroborating empirical evidence on large real-world graphs, demonstrating that the proposed scheme can significantly reduce the number of sampling steps and yield a superior speedup without compromising accuracy.
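The periodic-sampling-and-recycling idea can be made concrete with a short training-loop sketch. This is a minimal, hypothetical illustration, not the authors' implementation: the stub functions sample_subgraph and train_step, the function name lazy_train, and the sampling-period parameter "period" are all assumptions standing in for whatever sampler (the "how to sample") and model the framework is wrapped around.

    def sample_subgraph(graph, batch_nodes):
        # Placeholder for any layer-wise or subgraph sampler
        # (e.g., a GraphSAGE- or LADIES-style strategy); returns
        # the sampled computation graph for one mini-batch.
        return {"batch": list(batch_nodes), "support": graph}

    def train_step(model, subgraph):
        # Placeholder for one SGD step on the sampled subgraph.
        pass

    def lazy_train(model, graph, batches, period=4):
        # Sample a fresh subgraph only every `period` iterations
        # ("when to sample") and recycle the cached one in between,
        # amortizing preprocessing and data-loading cost over
        # several gradient steps.
        cached = None
        for it, batch_nodes in enumerate(batches):
            if it % period == 0:
                cached = sample_subgraph(graph, batch_nodes)
            train_step(model, cached)

With period = 1 this reduces to the underlying sampling method; larger periods amortize data preparation over more SGD steps, which is where the paper's condition on the recycling size comes in to preserve the convergence rate.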

Author Information

Morteza Ramezani (Pennsylvania State University)
Weilin Cong (Pennsylvania State University)
Mehrdad Mahdavi (Pennsylvania State University)

Mehrdad Mahdavi is an Assistant Professor of Computer Science & Engineering at Pennsylvania State University, where he runs the Machine Learning and Optimization Lab. His group works on fundamental problems in computational and theoretical machine learning.

Anand Sivasubramaniam (Penn State)
Mahmut Kandemir (Pennsylvania State University)
