The analysis of spatio-temporal sequences plays an important role in many real-world applications, demanding a high model capacity to capture the interdependence between spatial and temporal dimensions. Previous studies relied on manually designed networks that fall into three categories: spatial-first, temporal-first, and spatio-temporal synchronous. However, such manually designed heterogeneous models can hardly meet the spatio-temporal dependency capturing priorities required by diverse tasks. To address this, we propose a universal modeling framework with three distinctive characteristics: (i) an attention-based network backbone comprising the S2T Layer (spatial-first), the T2S Layer (temporal-first), and the STS Layer (spatio-temporal synchronous); (ii) a universal modeling framework, named UniST, whose unified architecture enables flexible modeling priorities with the three proposed modules; and (iii) an automatic search strategy, named AutoST, which finds the optimal spatio-temporal modeling priority via neural architecture search. Extensive experiments on five real-world datasets demonstrate that UniST with any single one of the three proposed modules achieves state-of-the-art performance, and that AutoST combined with UniST further improves the results.
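As a rough illustration (not the authors' released code), the PyTorch-style sketch below shows how a spatial-first (S2T) block might factor attention over the node and time axes of a (batch, time, nodes, features) tensor, and how a DARTS-style softmax mixture could be used to search over candidate modeling priorities; all class names, shapes, and hyperparameters here are assumptions for illustration.

```python
# Illustrative sketch only; layer names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn


class AxisAttention(nn.Module):
    """Self-attention applied along a single axis (space or time)."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)  # x: (batch', seq, d_model)
        return out


class S2TLayer(nn.Module):
    """Spatial-first priority: attend over nodes, then over time steps."""

    def __init__(self, d_model: int):
        super().__init__()
        self.spatial = AxisAttention(d_model)
        self.temporal = AxisAttention(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, n, d = x.shape
        # Spatial attention: fold time into the batch and attend across nodes.
        x = self.spatial(x.reshape(b * t, n, d)).reshape(b, t, n, d)
        # Temporal attention: fold nodes into the batch and attend across time.
        x = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        x = self.temporal(x).reshape(b, n, t, d).permute(0, 2, 1, 3)
        return x


class MixedSTLayer(nn.Module):
    """Soft mixture over candidate priority layers (architecture-search sketch).

    A temporal-first (T2S) layer would swap the two attention passes above,
    and a synchronous (STS) layer would attend jointly over the flattened
    (time x nodes) axis; the candidates are passed in so the mixture weights
    can be learned.
    """

    def __init__(self, candidates: nn.ModuleList):
        super().__init__()
        self.candidates = candidates
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.candidates))
```

In a sketch like this, the architecture parameters `alpha` would be trained alternately with the network weights, as in differentiable architecture search, and a discrete priority could then be read off by taking the argmax over the learned weights.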
Author Information
Jianxin Li (Beihang University)
Shuai Zhang (Beihang University)
Hui Xiong
Haoyi Zhou (Beihang University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: AutoST: Towards the Universal Modeling of Spatio-temporal Sequences
  Thu. Dec 1st 05:00 -- 07:00 PM Room Hall J #627

More from the Same Authors
- 2022 Spotlight: Lightning Talks 2B-3
  Jie-Jing Shao · Jiangmeng Li · Jiashuo Liu · Zongbo Han · Tianyang Hu · Jiayun Wu · Wenwen Qiang · Jun WANG · Zhipeng Liang · Lan-Zhe Guo · Wenjia Wang · Yanan Zhang · Xiao-wen Yang · Fan Yang · Bo Li · Wenyi Mo · Zhenguo Li · Liu Liu · Peng Cui · Yu-Feng Li · Changwen Zheng · Lanqing Li · Yatao Bian · Bing Su · Hui Xiong · Peilin Zhao · Bingzhe Wu · Changqing Zhang · Jianhua Yao
- 2022 Spotlight: Lightning Talks 2B-2
  Chenjian Gao · Rui Ding · Lingzhi LI · Fan Yang · Xingting Yao · Jianxin Li · Bing Su · Zhen Shen · Tongda Xu · Shuai Zhang · Ji-Rong Wen · Lin Guo · Fanrong Li · Kehua Guo · Zhongshu Wang · Zhi Chen · Xiangyuan Zhu · Zitao Mo · Dailan He · Hui Xiong · Yan Wang · Zheng Wu · Wenbing Tao · Jian Cheng · Haoyi Zhou · Li Shen · Ping Tan · Liwei Wang · Hongwei Qin
- 2022 Spotlight: MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
  Jiangmeng Li · Wenwen Qiang · Yanan Zhang · Wenyi Mo · Changwen Zheng · Bing Su · Hui Xiong
- 2022 Poster: Jump Self-attention: Capturing High-order Statistics in Transformers
  Haoyi Zhou · Siyang Xiao · Shanghang Zhang · Jieqi Peng · Shuai Zhang · Jianxin Li
- 2021 Poster: Discerning Decision-Making Process of Deep Neural Networks with Hierarchical Voting Transformation
  Ying Sun · Hengshu Zhu · Chuan Qin · Fuzhen Zhuang · Qing He · Hui Xiong
- 2021 Poster: Topic Modeling Revisited: A Document Graph-based Neural Network Perspective
  Dazhong Shen · Chuan Qin · Chao Wang · Zheng Dong · Hengshu Zhu · Hui Xiong