

Oral Poster

Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework

Zhongchao Yi · Zhengyang Zhou · Qihe Huang · Yanjiang Chen · Liheng Yu · Xu Wang · Yang Wang

East Exhibit Hall A-C #4300
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 5D: Machine Learning and Science
Fri 13 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

Spatiotemporal learning has become a pivotal technique for enabling urban intelligence. Traditional spatiotemporal models mostly focus on a specific task and assume the same distribution between training and testing sets. However, urban systems are usually dynamic and multi-sourced, with imbalanced data distributions, so current task-specific models fail to generalize to new urban conditions or adapt to new domains without explicitly modeling the interdependencies across the various dimensions and types of urban data. To this end, we argue that it is essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence, which reforms urban spatiotemporal learning from single-domain learning into cooperative multi-dimensional and multi-task learning. Specifically, CMuST proposes a new multi-dimensional spatiotemporal interaction network (MSTI) that exposes cross-interactions between context and main observations as well as self-interactions within the spatial and temporal aspects, which is also the core for capturing task-level commonality and personalization. To ensure continuous task learning, a novel Rolling Adaptation training scheme (RoAda) is devised, which not only preserves task uniqueness by constructing data-summarization-driven task prompts, but also harnesses correlated patterns among tasks through iterative modeling of model behaviors. We further establish a benchmark of three cities for multi-task spatiotemporal learning, and empirically demonstrate the superiority of CMuST via extensive evaluations on these datasets. CMuST achieves impressive improvements over existing SOTA methods on both few-shot streaming data and new-domain tasks. Code is available at https://github.com/DILab-USTCSZ/CMuST.
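
To make the interaction structure described in the abstract more concrete, below is a minimal PyTorch sketch of an MSTI-style block: a cross-interaction between embedded main observations and context features, followed by self-interactions along the spatial and temporal axes. The class names, the (batch, time, nodes, feature) tensor layout, and the use of standard multi-head attention are illustrative assumptions rather than the paper's actual design; the authors' implementation is available at the repository linked above.

    import torch
    import torch.nn as nn

    class CrossInteraction(nn.Module):
        """Cross-attention between main observations and context features.
        Hypothetical sketch; not the paper's exact design."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, main, context):
            # main, context: (batch, tokens, dim)
            out, _ = self.attn(query=main, key=context, value=context)
            return self.norm(main + out)

    class MSTIBlockSketch(nn.Module):
        """One MSTI-style block: context/observation cross-interaction,
        then self-attention over spatial and temporal axes (assumed layout)."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.cross = CrossInteraction(dim, heads)
            self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm_s = nn.LayerNorm(dim)
            self.norm_t = nn.LayerNorm(dim)

        def forward(self, x, context):
            # x, context: (batch, time, nodes, dim) -- assumed layout
            b, t, n, d = x.shape
            h = self.cross(x.reshape(b, t * n, d), context.reshape(b, t * n, d))
            h = h.reshape(b, t, n, d)
            # spatial self-interaction: attend across nodes at each time step
            hs = h.reshape(b * t, n, d)
            hs = self.norm_s(hs + self.spatial(hs, hs, hs)[0])
            h = hs.reshape(b, t, n, d)
            # temporal self-interaction: attend across time steps at each node
            ht = h.permute(0, 2, 1, 3).reshape(b * n, t, d)
            ht = self.norm_t(ht + self.temporal(ht, ht, ht)[0])
            return ht.reshape(b, n, t, d).permute(0, 2, 1, 3)

    if __name__ == "__main__":
        block = MSTIBlockSketch(dim=64)
        x = torch.randn(2, 12, 50, 64)    # 2 samples, 12 time steps, 50 nodes
        ctx = torch.randn(2, 12, 50, 64)  # matching context embeddings
        print(block(x, ctx).shape)        # torch.Size([2, 12, 50, 64])

Factoring the interactions this way (cross first, then per-axis self-attention) is one plausible reading of the abstract's description; the relative ordering and normalization choices here are design assumptions of this sketch.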
