This workshop bridges conversations across areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques that facilitate the development of novel temporal graph learning methods. It also brings together researchers from academia and industry, connecting work across fields and spanning theories, methodologies, and applications.
Sat 5:30 a.m. - 6:00 a.m.
Opening Remarks
Kellin Pelrine
Sat 6:00 a.m. - 6:15 a.m.
Spotlight: Time-Evolving Conditional Character-centric Graphs for Movie Understanding
(Spotlight Talk)
Temporal graph structure learning for long-term human-centric video understanding is promising but remains challenging due to the scarcity of dense graph annotations for long videos. A desired capability is to learn the dynamic spatio-temporal interactions of human actors and other objects implicitly from the visual information itself. Toward this goal, we present a novel Time-Evolving Conditional cHaracter-centric graph (TECH) for long-term human-centric video understanding, with application to Movie QA. TECH is inherently a recurrent system over a query-conditioned dynamic graph that evolves over time along the story throughout the course of a movie clip. Aiming at human-centric video understanding, TECH uses a two-stage feature refinement process to draw attention to human characters and their interactions while treating interactions with non-human objects as contextual information. Tested on the large-scale TVQA dataset, TECH clearly shows advantages over recent state-of-the-art models.
Long Dang · Thao Le · Vuong Le · Tu Minh Phuong · Truyen Tran
Sat 6:15 a.m. - 6:30 a.m.
Spotlight: Scalable Spatiotemporal Graph Neural Networks
(Spotlight Talk)
Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph, hence hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
Andrea Cini · Ivan Marisca · Filippo Maria Bianchi · Cesare Alippi
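The abstract describes a pipeline of training-free recurrent encoding of each node's history, spatial propagation with powers of the adjacency matrix, and a learned readout. The snippet below is a minimal sketch of that idea, not the authors' implementation; the reservoir size, the leaky-tanh update, and the least-squares readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(X, hidden=64, leak=0.9):
    """Randomized (untrained) RNN: embed each node's scalar history into a fixed state.
    X: array of shape (T, N) -- one time series per node (illustrative)."""
    T, N = X.shape
    W_in = rng.normal(scale=0.5, size=(hidden, 1))
    W_h = rng.normal(scale=1.0 / np.sqrt(hidden), size=(hidden, hidden))
    H = np.zeros((N, hidden))
    for t in range(T):
        pre = X[t][:, None] @ W_in.T + H @ W_h.T
        H = (1 - leak) * H + leak * np.tanh(pre)
    return H                                      # (N, hidden)

def propagate(H, A, K=2):
    """Concatenate states diffused with increasing powers of the normalized adjacency."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    A_norm = A / deg
    feats, P = [H], H
    for _ in range(K):
        P = A_norm @ P                            # one more hop of spatial smoothing
        feats.append(P)
    return np.concatenate(feats, axis=1)          # (N, hidden * (K + 1))

# toy data: 5 nodes on a ring, 100 time steps
N, T = 5, 100
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
X = rng.normal(size=(T, N))
y = X[-1] * 2.0                                   # dummy one-step-ahead target

Z = propagate(reservoir_states(X[:-1]), A)        # embeddings require no training
w, *_ = np.linalg.lstsq(Z, y, rcond=None)         # only the readout is fitted
print("fitted readout shape:", w.shape)
```

Because the recurrent and propagation weights are never trained, the node embeddings can be computed once and reused, and only the readout is fitted, which is where the scalability argument in the abstract comes from.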
Sat 6:30 a.m. - 6:45 a.m.
Spotlight: Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models
(Spotlight Talk)
Real-world graphs such as social networks, communication networks, and rating networks are constantly evolving over time. Many architectures have been developed to learn effective node representations using both graph structure and its dynamics. While the robustness of static graph models is well-studied, the vulnerability of dynamic graph models to adversarial attacks is underexplored. In this work, we design a novel attack on discrete-time dynamic graph models where we desire to perturb the input graph sequence in a manner that preserves the structural evolution of the graph. To this end, we motivate a novel Time-Aware Perturbation (TAP) constraint, which ensures that the perturbations introduced at each time step are restricted to only a small fraction of the number of changes in the graph since the previous time step. We present a theoretically grounded Projected Gradient Descent approach for dynamic graphs to find effective perturbations under the TAP constraint. Experiments on dynamic link prediction show that our approach is up to 4x more effective than the baseline methods for attacking these models under the novel constraint. Dyn-PGD also outperforms the existing baselines on the node classification task by up to 6x and is 2x more efficient in running time than the best-performing baseline.
Kartik Sharma · Rakshit Trivedi · Rohit Sridhar · Srijan Kumar
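The Time-Aware Perturbation constraint can be read as a per-step budget: the edges an attacker flips at step t must number no more than a fraction of the edges that naturally changed between steps t-1 and t. A minimal check of such a budget is sketched below; the function name, the undirected-graph assumption, and the parameter eps are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def tap_budget_ok(A_prev, A_curr, A_pert, eps=0.1):
    """Check a Time-Aware-Perturbation-style budget on one snapshot.

    A_prev, A_curr: consecutive clean adjacency matrices (0/1 numpy arrays).
    A_pert: the perturbed version of A_curr.
    The number of flipped edges must stay below eps * (natural changes at this step).
    """
    natural_changes = int(np.abs(A_curr - A_prev).sum()) // 2   # undirected: count each edge once
    perturbations = int(np.abs(A_pert - A_curr).sum()) // 2
    return perturbations <= eps * natural_changes

# toy example: a 4-node graph gains two edges, the attacker flips one more
A_prev = np.zeros((4, 4), dtype=int)
A_curr = A_prev.copy(); A_curr[0, 1] = A_curr[1, 0] = 1; A_curr[2, 3] = A_curr[3, 2] = 1
A_pert = A_curr.copy(); A_pert[0, 2] = A_pert[2, 0] = 1
print(tap_budget_ok(A_prev, A_curr, A_pert, eps=0.6))  # True: 1 flip vs. a budget of 0.6 * 2
```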
Sat 6:45 a.m. - 7:00 a.m.
Spotlight: Anomaly Detection in Multiplex Dynamic Networks: from Blockchain Security to Brain Disease Prediction
(Spotlight Talk)
The problem of identifying anomalies in dynamic networks is a fundamental task with a wide range of applications. However, it raises critical challenges due to the complex nature of anomalies, lack of ground truth knowledge, and complex and dynamic interactions in the network. Most existing approaches usually study networks with a single type of connection between vertices, while in many applications interactions between objects vary, yielding multiplex networks. We propose ANOMULY, a general, unsupervised edge anomaly detection framework for multiplex dynamic networks. In each relation type, ANOMULY sees node embeddings at different GNN layers as hierarchical node states and employs a GRU cell to capture temporal properties of the network and update node embeddings over time. We then add an attention mechanism that incorporates information across different types of relations. Our case study on brain networks shows how this approach could be employed as a new tool to understand abnormal brain activity that might reveal a brain disease or disorder. Extensive experiments on nine real-world datasets demonstrate that ANOMULY achieves state-of-the-art performance.
Ali Behrouz · Margo Seltzer
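A rough sketch of the kind of architecture the abstract describes is given below: per relation type, a GRU cell carries node states forward in time as one-hop aggregated messages arrive, and a learned attention then mixes the per-relation states. This is only an illustration of the idea under simplifying assumptions (a single GNN layer, toy dense adjacencies); it is not the ANOMULY code.

```python
import torch
import torch.nn as nn

class TemporalMultiplexEncoder(nn.Module):
    """Minimal sketch: per-relation GRU over time plus attention across relation types."""
    def __init__(self, in_dim, hid_dim, num_relations):
        super().__init__()
        self.lin = nn.ModuleList([nn.Linear(in_dim, hid_dim) for _ in range(num_relations)])
        self.gru = nn.ModuleList([nn.GRUCell(hid_dim, hid_dim) for _ in range(num_relations)])
        self.att = nn.Linear(hid_dim, 1)
        self.num_relations, self.hid_dim = num_relations, hid_dim

    def forward(self, feats, adjs):
        # feats: (T, N, in_dim); adjs: (T, R, N, N) row-normalized adjacency per relation
        T, N, _ = feats.shape
        h = [torch.zeros(N, self.hid_dim) for _ in range(self.num_relations)]
        for t in range(T):
            for r in range(self.num_relations):
                msg = adjs[t, r] @ self.lin[r](feats[t])   # one-hop GNN-style aggregation
                h[r] = self.gru[r](msg, h[r])              # temporal update of node states
        H = torch.stack(h, dim=1)                          # (N, R, hid_dim)
        w = torch.softmax(self.att(H), dim=1)              # attention over relation types
        return (w * H).sum(dim=1)                          # (N, hid_dim)

# toy usage: 3 time steps, 2 relations, 5 nodes, 4 input features
enc = TemporalMultiplexEncoder(in_dim=4, hid_dim=8, num_relations=2)
feats = torch.randn(3, 5, 4)
adjs = torch.rand(3, 2, 5, 5)
adjs = adjs / adjs.sum(-1, keepdim=True)
print(enc(feats, adjs).shape)   # torch.Size([5, 8])
```

In an anomaly-detection setting, the resulting node embeddings would feed an edge scorer whose low-probability edges are flagged as anomalous; that scoring head is omitted here.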
Sat 7:00 a.m. - 7:30 a.m.
Keynote 1 by S. Mehran Kazemi: Learning and Reasoning with Temporal Knowledge Graphs
(Keynote)
Seyed Mehran Kazemi
Sat 7:30 a.m. - 8:00 a.m.
Keynote 2 by Bryan Hooi: Temporal Graph Learning: Some Challenges and Recent Directions
(Keynote)
Bryan Hooi
Sat 8:00 a.m. - 8:30 a.m.
Coffee Break
Sat 8:30 a.m. - 10:00 a.m.
Poster Session
Sat 10:00 a.m. - 11:30 a.m.
Lunch Break
Sat 11:30 a.m. - 12:00 p.m.
Keynote 3 by Vikas Garg: Provably Powerful Temporal Graph Networks
(Keynote)
Vikas Garg
Sat 12:00 p.m. - 12:30 p.m.
Keynote 4 by Srijan Kumar: Temporal GNNs for Web Safety and Integrity
(Keynote)
Srijan Kumar
Sat 12:30 p.m. - 1:00 p.m.
Keynote 5 by Pan Li: Representation learning for predicting temporal network evolution
(Keynote)
Representation learning for temporal networks is challenging, first due to the entanglement of irregular network structure and temporal information, and second due to the large scale of real-world temporal networks. Most previous work on this task has extended graph neural networks to the temporal setting. However, we argue that this type of method may not capture well some crucial structural features, such as triadic closures, which are important for predicting how a temporal network evolves over time. In this talk, we will first introduce a strategy to address the problem, named causal anonymous walks, which performs online structural feature construction based on sampled walks. We will then introduce a very recent work that adopts neighborhood-aware representations to substantially accelerate the sampling-based approach, making the idea applicable to massive networks. The papers to be introduced are:
Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks, Wang et al., ICLR 2021.
Neighborhood-aware Scalable Temporal Network Representation Learning, Luo & Li, LoG 2022 (selected for oral presentation).
Pan Li
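As a small illustration of the anonymization step behind causal anonymous walks (only the relabeling, not the temporal sampling or the full method from the ICLR 2021 paper), each node on a walk is replaced by the position of its first occurrence, so walks from different parts of the graph become comparable, identity-free structural patterns:

```python
def anonymize(walk):
    """Replace node ids by their order of first appearance on the walk.
    [8, 3, 8, 5] and [1, 7, 1, 2] both map to [0, 1, 0, 2]."""
    first_seen = {}
    out = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        out.append(first_seen[node])
    return out

print(anonymize([8, 3, 8, 5]))   # [0, 1, 0, 2]
print(anonymize([1, 7, 1, 2]))   # [0, 1, 0, 2] -- the same structural pattern
```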
Sat 1:00 p.m. - 1:30 p.m.
Coffee Break
Sat 1:30 p.m. - 2:30 p.m.
Panel
Vikas Garg · Pan Li · Srijan Kumar · Emanuele Rossi · Shenyang Huang
Sat 2:30 p.m. - 3:00 p.m.
Closing Remarks
Reihaneh Rabbany
-
Approximate Bayesian Computation for Panel Data with Signature Maximum Mean Discrepancies
(Poster)
Simulation models are becoming a staple tool across application domains from economics to biology. When such models are stochastic, evaluating their likelihood functions in a reasonable time is typically infeasible or even impossible. In these settings, simulation-based inference procedures are a convenient means of approximating conventional parameter calibration procedures. A popular example is approximate Bayesian computation, in which the observed data is compared to the simulation output at different parameter values through some distance function. While many such methods exist, few are compatible with panel data of various kinds, as might appear in medical settings, for example; many methods instead assume iid observations in both the simulated and observed data. We seek to address this gap through the use of signature maximum mean discrepancies as distance measures in approximate Bayesian computation. Through experiments with a dynamical model of functional brain networks, we demonstrate that such an approach can flexibly operate on panel data of various kinds, for example dynamic graph data arising from multiple patients/subjects in fMRI settings.
Joel Dyer · John Fitzgerald · Bastian Rieck · Sebastian Schmon
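A bare-bones rejection-ABC loop makes the role of the distance function concrete. In the paper the distance is a signature maximum mean discrepancy between panels; in the sketch below it is abstracted into a user-supplied callable, and the Gaussian toy simulator, the mean-difference distance, and the tolerance are purely illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed, simulate, prior_sample, distance, n_draws=5000, tol=0.5):
    """Keep parameter draws whose simulated data lie within `tol` of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < tol:
            accepted.append(theta)
    return np.array(accepted)

# toy setup: infer the mean of a Gaussian panel; a real use would plug in the
# stochastic simulator and a signature-MMD distance between panels instead
observed = rng.normal(loc=1.5, size=(10, 20))                   # 10 "subjects", 20 time points
simulate = lambda theta: rng.normal(loc=theta, size=(10, 20))
prior_sample = lambda: rng.uniform(-3, 3)
distance = lambda a, b: abs(a.mean() - b.mean())                # stand-in for signature MMD

posterior = abc_rejection(observed, simulate, prior_sample, distance, tol=0.1)
print(len(posterior), posterior.mean() if len(posterior) else None)
```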
-
Anomaly Detection in Multiplex Dynamic Networks: from Blockchain Security to Brain Disease Prediction
(Poster)
The problem of identifying anomalies in dynamic networks is a fundamental task with a wide range of applications. However, it raises critical challenges due to the complex nature of anomalies, lack of ground truth knowledge, and complex and dynamic interactions in the network. Most existing approaches usually study networks with a single type of connection between vertices, while in many applications interactions between objects vary, yielding multiplex networks. We propose ANOMULY, a general, unsupervised edge anomaly detection framework for multiplex dynamic networks. In each relation type, ANOMULY sees node embeddings at different GNN layers as hierarchical node states and employs a GRU cell to capture temporal properties of the network and update node embeddings over time. We then add an attention mechanism that incorporates information across different types of relations. Our case study on brain networks shows how this approach could be employed as a new tool to understand abnormal brain activity that might reveal a brain disease or disorder. Extensive experiments on nine real-world datasets demonstrate that ANOMULY achieves state-of-the-art performance.
Ali Behrouz · Margo Seltzer
-
TTERGM: Social Theory-Driven Network Simulation
(Poster)
Temporal exponential random graph models (TERGM) are powerful statistical models that can be used to infer the temporal pattern of edge formation and elimination in complex networks (e.g., social networks). TERGMs can also be used in a generative capacity to predict longitudinal time series data in these evolving graphs. However, parameter estimation within this framework fails to capture many real-world properties of social networks, including triadic relationships, small-world characteristics, and social learning theories which could be used to constrain the probabilistic estimation of dyadic covariates. Here, we propose triadic temporal exponential random graph models (TTERGM) to fill this void, which include these hierarchical network relationships within the graph model. We represent social network learning theory as an additional probability distribution that optimizes Gibbs entropy in the graph vector space. The new parameters are then approximated via Markov chain Monte Carlo maximum likelihood estimation. We show that our TTERGM model achieves improved fidelity and more accurate predictions compared to several benchmark methods.
Yifan Huang · Clayton Barham · Eric Page · Pamela K Douglas
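To make the MCMC ingredient concrete, the sketch below shows a generic Metropolis edge-toggle sampler for an exponential random graph with edge and triangle statistics. It illustrates only the sampling machinery common to (T)ERGM estimation; it is not the TTERGM model, its social-theory prior, or the authors' code, and the statistics and parameter values are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def stats(A):
    """Sufficient statistics: edge count and triangle count of an undirected 0/1 graph."""
    edges = A.sum() / 2
    triangles = np.trace(A @ A @ A) / 6
    return np.array([edges, triangles])

def ergm_sample(theta, n_nodes=20, n_steps=5000):
    """Metropolis sampler over graphs: propose toggling one dyad, accept with the ERGM ratio."""
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    s = stats(A)
    for _ in range(n_steps):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        A_prop = A.copy()
        A_prop[i, j] = A_prop[j, i] = 1 - A[i, j]
        s_prop = stats(A_prop)
        log_ratio = theta @ (s_prop - s)          # change in theta . stats(A)
        if np.log(rng.random()) < log_ratio:
            A, s = A_prop, s_prop
    return A

A = ergm_sample(theta=np.array([-2.0, 0.2]))       # sparse graphs with a mild triangle bonus
print("edges, triangles:", stats(A))
```

Maximum likelihood estimation then wraps such a sampler in an outer loop that adjusts theta until the simulated statistics match the observed ones.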
-
Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models
(Poster)
Real-world graphs such as social networks, communication networks, and rating networks are constantly evolving over time. Many architectures have been developed to learn effective node representations using both graph structure and its dynamics. While the robustness of static graph models is well-studied, the vulnerability of dynamic graph models to adversarial attacks is underexplored. In this work, we design a novel attack on discrete-time dynamic graph models where we desire to perturb the input graph sequence in a manner that preserves the structural evolution of the graph. To this end, we motivate a novel Time-Aware Perturbation (TAP) constraint, which ensures that the perturbations introduced at each time step are restricted to only a small fraction of the number of changes in the graph since the previous time step. We present a theoretically grounded Projected Gradient Descent approach for dynamic graphs to find effective perturbations under the TAP constraint. Experiments on dynamic link prediction show that our approach is up to 4x more effective than the baseline methods for attacking these models under the novel constraint. Dyn-PGD also outperforms the existing baselines on the node classification task by up to 6x and is 2x more efficient in running time than the best-performing baseline.
Kartik Sharma · Rakshit Trivedi · Rohit Sridhar · Srijan Kumar
-
Polarization identification on multiple timescale using representation learning on temporal graphs in Eulerian description
(Poster)
Social media is often described as both reflecting and distorting real-life debates. Indeed, social division occurs not only offline but also online on various political topics or scientific controversies. Several studies propose tools to identify and quantify online controversies through stance detection or polarization measures. While polarization is typically studied as a "snapshot" in time of a social network, we consider it a temporal process. Moreover, the recent evolution in temporal graph representation learning provides new tools to directly combine time, content and graph topology. Current techniques that characterize polarization are beginning to use these tools, typically adopting a Lagrangian description, which focuses on user trajectories. In this article, we make a case for approaching these problems with an Eulerian description, the counterpart description used in fluid mechanics. In this description, the temporal evolution of node embeddings is represented through the deformation of velocity vector fields. Finally, we validate our method on a retweet graph from the last French presidential election campaign.
Victor Chomel · Nathanaël Cuvelle--Magar · Maziyar Panahi · David Chavalarias
-
Scalable Spatiotemporal Graph Neural Networks
(Poster)
Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph, hence hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
Andrea Cini · Ivan Marisca · Filippo Maria Bianchi · Cesare Alippi
-
On the Evaluation of Methods for Temporal Knowledge Graph Forecasting
(Poster)
Due to its ability to incorporate and leverage time information in relational data, Temporal Knowledge Graph (TKG) learning has become an increasingly studied research field. With the goal of predicting the future, researchers have presented innovative methods for what is called Temporal Knowledge Graph Forecasting. However, the experimental procedures in this line of work show inconsistencies that strongly influence empirical results and thus lead to distorted comparisons among models. This work focuses on the evaluation of TKG Forecasting models: we describe evaluation settings commonly used in this research area and shed light on its scholarship issues. Further, we provide a unified evaluation protocol and carry out a re-evaluation of state-of-the-art models on the most common datasets under such a setting. Finally, we show the difference in results caused by different evaluation settings. We believe that this work provides a solid foundation for future evaluations of TKG Forecasting models and can thus contribute to the development of this growing research area.
Julia Gastinger · Timo Sztyler · Anett Schuelke · Lokesh Sharma
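One setting that often differs across TKG forecasting papers is how ranks are filtered. The toy comparison below is a generic illustration of why such settings matter, not the protocol proposed in the paper: the rank of a correct entity changes once other answers known to be true for the same query are removed from its competitor set, and metrics such as MRR or Hits@k shift accordingly.

```python
def rank_of_target(scores, target, known_true=None, filtered=False):
    """Rank the target entity among candidates by score (1 = best).
    In the filtered setting, other entities known to be correct for the same
    (subject, relation, timestamp) query are not counted as competitors."""
    competitors = {
        ent: s for ent, s in scores.items()
        if ent == target or not (filtered and known_true and ent in known_true)
    }
    better = sum(1 for ent, s in competitors.items() if s > scores[target])
    return better + 1

scores = {"berlin": 0.9, "paris": 0.8, "rome": 0.3}      # model scores for one query
known_true = {"berlin"}                                   # another valid answer for this query
print(rank_of_target(scores, "paris"))                                # raw rank: 2
print(rank_of_target(scores, "paris", known_true, filtered=True))     # filtered rank: 1
```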
-
Learning Dynamic Graph Embeddings Using Random Walk With Temporal Backtracking
(Poster)
Representation learning on graphs (also referred to as network embedding) can be done at different levels of granularity, from node to graph level. The majority of work on graph representation learning focuses on the former, and while there has been some work done on graph-level embedding, these typically deal with static networks. However, learning low-dimensional graph-level representations for dynamic (i.e., temporal) networks is important for such downstream graph retrieval tasks as temporal graph similarity ranking, temporal graph isomorphism, and anomaly detection. In this paper, we propose a novel temporal graph-level embedding method to fill this gap. Our method first builds a multilayer graph and then utilizes a novel modified random walk with temporal backtracking to generate temporal contexts for the nodes in the graph. Finally, a "document-level" language model is learned from these contexts to generate graph-level embeddings. We evaluate our model on five publicly available datasets for two commonly used tasks of graph similarity ranking and anomaly detection. Our results show that our method achieves state-of-the-art performance compared to all prior baselines.
Chenghan Huang · Lili Wang · Xinyuan Cao · Weicheng Ma · Soroush Vosoughi
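For intuition about the context-generation step, the sketch below samples plain time-respecting walks (edges with non-decreasing timestamps) and treats the node sequences as sentences for a downstream document-level embedder. The paper's modified random walk with temporal backtracking is a refinement of this idea and is not reproduced here; the toy edge list and walk parameters are assumptions.

```python
import random

random.seed(0)

# temporal edge list: (source, destination, timestamp)
edges = [(0, 1, 1), (1, 2, 2), (1, 3, 2), (2, 4, 3), (3, 4, 4), (4, 0, 5)]

# index outgoing edges by source node
out_edges = {}
for u, v, t in edges:
    out_edges.setdefault(u, []).append((v, t))

def temporal_walk(start, length=4):
    """Sample a walk whose edge timestamps never decrease (a time-respecting path)."""
    walk, node, now = [start], start, 0
    for _ in range(length):
        choices = [(v, t) for v, t in out_edges.get(node, []) if t >= now]
        if not choices:
            break
        node, now = random.choice(choices)
        walk.append(node)
    return walk

# walks become the "sentences" from which a document-level embedding of the whole
# temporal graph could be learned (e.g., with a doc2vec-style model)
contexts = [temporal_walk(u) for u in range(5) for _ in range(3)]
print(contexts[:3])
```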
-
Influencer Detection with Dynamic Graph Neural Networks
(Poster)
Leveraging network information for prediction tasks has become a common practice in many domains. Being an important part of targeted marketing, influencer detection can potentially benefit from incorporating dynamic network representation. In this work, we contribute to the literature by investigating different dynamic Graph Neural Network (GNN) configurations for influencer detection and evaluate their prediction performance using a unique corporate data set. We show that encoding temporal attributes and having a history of node dynamics prior to making predictions significantly impact performance. Furthermore, our empirical evaluation illustrates that network centrality measures are more beneficial than capturing neighborhood representation.
Elena Tiukhova · Emiliano Penaloza · María Óskarsdóttir · Hernan Garcia · Alejandro Bahnsen · Bart Baesens · Monique Snoeck · Cristián Bravo
-
Time-Evolving Conditional Character-centric Graphs for Movie Understanding
(Poster)
Temporal graph structure learning for long-term human-centric video understanding is promising but remains challenging due to the scarcity of dense graph annotations for long videos. A desired capability is to learn the dynamic spatio-temporal interactions of human actors and other objects implicitly from the visual information itself. Toward this goal, we present a novel Time-Evolving Conditional cHaracter-centric graph (TECH) for long-term human-centric video understanding, with application to Movie QA. TECH is inherently a recurrent system over a query-conditioned dynamic graph that evolves over time along the story throughout the course of a movie clip. Aiming at human-centric video understanding, TECH uses a two-stage feature refinement process to draw attention to human characters and their interactions while treating interactions with non-human objects as contextual information. Tested on the large-scale TVQA dataset, TECH clearly shows advantages over recent state-of-the-art models.
Long Dang · Thao Le · Vuong Le · Tu Minh Phuong · Truyen Tran
-
A Simple But Powerful Graph Encoder for Temporal Knowledge Graph Completion
(Poster)
Knowledge graphs contain rich knowledge about various entities and the relational information among them, while temporal knowledge graphs (TKGs) describe and model the interactions of the entities over time. In this context, automatic temporal knowledge graph completion (TKGC) has gained great interest. Recent TKGC methods integrate advanced deep learning techniques, e.g., Transformers, and achieve superior model performance. However, this also introduces a large number of additional parameters, which places a heavier burden on parameter optimization. In this paper, we propose a simple but powerful graph encoder for TKGC, called TARGCN. TARGCN is parameter-efficient, and it extensively explores every entity's temporal context for learning contextualized representations. We find that instead of adopting various kinds of complex modules, it is more beneficial to efficiently capture the temporal contexts of entities. We evaluate TARGCN on three benchmark datasets. Our model achieves a more than 46% relative improvement on the GDELT dataset compared with state-of-the-art TKGC models. Meanwhile, it outperforms the strongest baseline on the ICEWS05-15 dataset with around 18% fewer parameters.
Zifeng Ding · Yunpu Ma · Bailan He · Zhen Han · Volker Tresp