Implicit SVD for Graph Representation Learning
Sami Abu-El-Haija · Hesham Mostafa · Marcel Nassar · Valentino Crespi · Greg Ver Steeg · Aram Galstyan

Tue Dec 07 08:30 AM -- 10:00 AM (PST)
Recent improvements in the performance of state-of-the-art (SOTA) methods for Graph Representation Learning (GRL) have come at the cost of significant computational resource requirements for training, e.g., for calculating gradients via backprop over many data epochs. Meanwhile, Singular Value Decomposition (SVD) can find closed-form solutions to convex problems, using only a handful of epochs. In this paper, we make GRL more computationally tractable for those with modest hardware. We design a framework that computes the SVD of *implicitly* defined matrices, and apply this framework to several GRL tasks. For each task, we derive a first-order approximation of a SOTA model, where we design an (expensive-to-store) matrix $\mathbf{M}$ and train the model, in closed form, via SVD of $\mathbf{M}$, without calculating the entries of $\mathbf{M}$. By converging to a unique point in one step, and without calculating gradients, our models show competitive empirical test performance on various graphs such as article citation and biological interaction networks. More importantly, SVD can initialize a deeper model that is architected to be non-linear almost everywhere, yet behaves linearly when its parameters reside on a hyperplane, onto which SVD initializes it. The deeper model can then be fine-tuned within only a few epochs. Overall, our algorithm trains hundreds of times faster than state-of-the-art methods, while remaining competitive in empirical test performance. We open-source our implementation at: https://github.com/samihaija/isvd
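The key idea, an SVD of a matrix that is never materialized, can be illustrated with off-the-shelf tooling. The sketch below is not the authors' implementation: it uses SciPy's `LinearOperator` to define a matrix $\mathbf{M} = \mathbf{A}\mathbf{A}^\top$ only through matrix-vector products, then computes its top singular values with `svds`; the matrix `A` is a random stand-in for whatever graph-derived design matrix a given task would use.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))  # stand-in for a graph-derived design matrix

# M = A @ A.T is n x n and expensive to store; expose it only through
# matvec products, so its entries are never computed explicitly.
def matvec(v):
    return A @ (A.T @ v)

M_op = LinearOperator(
    shape=(n, n), matvec=matvec, rmatvec=matvec,  # symmetric, so rmatvec == matvec
    dtype=np.float64,
)

# Top-k singular triplets of the implicit M, via iterative matvec calls only.
k = 8
U, s, Vt = svds(M_op, k=k)

# Sanity check against the explicit (materialized) M.
s_ref = np.linalg.svd(A @ A.T, compute_uv=False)[:k]
```

The only cost per iteration is two dense matvecs with `A`, so memory stays at O(nd) rather than O(n^2); this is the generic matrix-free pattern, whereas the paper's framework derives the specific implicit $\mathbf{M}$ for each GRL task.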

Author Information

Sami Abu-El-Haija (USC Information Sciences Institute)
Hesham Mostafa (Intel Corporation)
Marcel Nassar (Intel)
Valentino Crespi (Information Sciences Institute - USC)
Greg Ver Steeg (USC Information Sciences Institute)
Aram Galstyan (USC Information Sciences Institute)