First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting
Xiyuan Zhang · Xiaoyong Jin · Karthick Gopalswamy · Gaurav Gupta · Youngsuk Park · Xingjian Shi · Hao Wang · Danielle Maddix · Yuyang (Bernie) Wang

Transformer-based models have gained widespread popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in the time domain, recent works also explore learning attention in frequency domains (e.g., the Fourier and wavelet domains), given that seasonal patterns can be better captured there. In this work, we seek to understand the relationships among attention models across time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., when a linear kernel is applied to the attention scores). Empirically, we analyze how attention models in different domains behave differently through various synthetic experiments with seasonality, trend, and noise, with emphasis on the role of the softmax operation therein. These theoretical and empirical analyses motivate us to propose a new method, TDformer (Trend Decomposition Transformer), which first applies seasonal-trend decomposition and then additively combines an MLP that predicts the trend component with Fourier attention that predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance compared with existing attention-based models.
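To make the decompose-then-attend design concrete, below is a minimal, hypothetical PyTorch sketch of the idea described in the abstract, not the authors' released implementation. All names (moving_average_decompose, FourierAttention, TDformerSketch) and hyperparameters (window_size=25, d_model=64) are illustrative assumptions.

```python
# Hypothetical sketch of the TDformer idea: seasonal-trend decomposition,
# an MLP for the trend, Fourier attention for the seasonal part, summed.
# Not the authors' code; names and hyperparameters are assumptions.
import torch
import torch.nn as nn


def moving_average_decompose(x, window_size=25):
    """Split a series (batch, length, channels) into trend and seasonal
    parts via a moving-average filter, a common seasonal-trend decomposition."""
    # Pad with edge values so the smoothed trend keeps the original length.
    front = x[:, :1, :].repeat(1, (window_size - 1) // 2, 1)
    back = x[:, -1:, :].repeat(1, window_size // 2, 1)
    padded = torch.cat([front, x, back], dim=1)
    trend = nn.functional.avg_pool1d(
        padded.transpose(1, 2), kernel_size=window_size, stride=1
    ).transpose(1, 2)
    return trend, x - trend  # (trend, seasonal)


class FourierAttention(nn.Module):
    """Attention computed in the frequency domain: queries/keys/values are
    mapped with an FFT, scores are formed on the complex coefficients, and
    the result is mapped back with an inverse FFT."""

    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, length, d_model)
        q = torch.fft.rfft(self.q(x), dim=1)
        k = torch.fft.rfft(self.k(x), dim=1)
        v = torch.fft.rfft(self.v(x), dim=1)
        scores = torch.einsum("bfd,bgd->bfg", q, k.conj())
        # Softmax over score magnitudes; with a linear kernel instead of
        # softmax, time- and frequency-domain attention coincide (see abstract).
        weights = torch.softmax(scores.abs(), dim=-1).to(scores.dtype)
        out = torch.einsum("bfg,bgd->bfd", weights, v)
        return torch.fft.irfft(out, n=x.size(1), dim=1)


class TDformerSketch(nn.Module):
    """First de-trend, then attend: MLP forecasts the trend, Fourier
    attention forecasts the seasonal component; outputs are added."""

    def __init__(self, in_len, out_len, d_model=64, n_channels=1):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        self.trend_mlp = nn.Sequential(
            nn.Linear(in_len, out_len), nn.ReLU(), nn.Linear(out_len, out_len)
        )
        self.seasonal_attn = FourierAttention(d_model)
        self.seasonal_proj = nn.Linear(in_len, out_len)
        self.head = nn.Linear(d_model, n_channels)

    def forward(self, x):  # x: (batch, in_len, n_channels)
        trend, seasonal = moving_average_decompose(x)
        # MLP maps the trend along the time axis to the forecast horizon.
        trend_out = self.trend_mlp(trend.transpose(1, 2)).transpose(1, 2)
        # Fourier attention models the seasonal component.
        s = self.seasonal_attn(self.embed(seasonal))
        s = self.seasonal_proj(s.transpose(1, 2)).transpose(1, 2)
        return trend_out + self.head(s)


model = TDformerSketch(in_len=96, out_len=24)
forecast = model(torch.randn(8, 96, 1))  # -> (8, 24, 1)
```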

Author Information

Xiyuan Zhang (UC San Diego)
Xiaoyong Jin (Amazon)
Karthick Gopalswamy (AWS AI)
Gaurav Gupta (University of Southern California)
Youngsuk Park (Amazon, AWS AI Labs)
Xingjian Shi (HKUST)
Hao Wang (Rutgers University)
Danielle Maddix (Amazon Web Services)
Yuyang (Bernie) Wang (AWS AI Labs)