

Poster in Workshop: Temporal Graph Learning Workshop @ NeurIPS 2023

Leveraging Temporal Graph Networks Using Module Decoupling

Or Feldman · Chaim Baskin


Abstract:

Modern approaches for learning on dynamic graphs have adopted the use of batches instead of applying updates one by one. The use of batches allows these techniques to become helpful in streaming scenarios where updates to graphs are received at extreme speeds. Using batches, however, forces the models to update infrequently, which results in the degradation of their performance. In this work, we suggest a decoupling strategy that enables the models to update frequently while using batches. By decoupling the core modules of temporal graph networks and implementing them using a minimal number of learnable parameters, we have developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs. LDTGN was validated on various dynamic graph benchmarks, providing comparable or state-of-the-art results with significantly higher throughput than previous art. Notably, our method outperforms previous approaches by more than 20% on benchmarks that require rapid model update rates, such as USLegis or UNTrade. The code to reproduce our experiments is available at https://github.com/TPFI22/MODULES-DECOUPLING.
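As a rough illustration of the batching dilemma described above (this is not the authors' implementation; all names and the toy memory representation are hypothetical), the sketch below contrasts a coupled step, where node memory is refreshed only once per batch and so grows stale within it, with a decoupled step, where a lightweight per-event memory update keeps the state fresh while the rest of the pipeline can still consume whole batches:

```python
from collections import defaultdict

def coupled_step(memory, batch):
    # Predictions for the whole batch read memory that is stale for
    # every event after the first one in the batch.
    preds = [(memory[src], memory[dst]) for src, dst, t in batch]
    # Memory is updated only once, after the entire batch is processed.
    for src, dst, t in batch:
        memory[src] = memory[dst] = t
    return preds

def decoupled_step(memory, batch):
    # A cheap memory-update module (here, just storing the last
    # interaction time) runs after every event, so later events in the
    # same batch see up-to-date state; heavier modules could still
    # operate on the batch as a whole.
    preds = []
    for src, dst, t in batch:
        preds.append((memory[src], memory[dst]))
        memory[src] = memory[dst] = t
    return preds

batch = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
print(coupled_step(defaultdict(float), batch))    # stale memory within batch
print(decoupled_step(defaultdict(float), batch))  # fresh memory per event
```

The coupled step returns all-zero (initial) memory for every event, while the decoupled step reflects earlier events in the batch; this is the kind of within-batch staleness the decoupling strategy is meant to avoid.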
