

Poster in Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Learning to Embed Time Series Patches Independently

Seunghan Lee · Taeyoung Park · Kibok Lee


Abstract:

Conventional masked time series modeling patchifies and partially masks out a time series (TS), then trains a Transformer to capture the dependencies between patches by predicting masked patches from unmasked ones. However, we argue that capturing such dependencies might not be an optimal strategy for TS representation learning; rather, embedding patches independently results in better representations. Specifically, we propose to use 1) a patch reconstruction task that autoencodes each patch without looking at other patches, and 2) an MLP that embeds each patch independently. In addition, we introduce complementary contrastive learning to hierarchically capture adjacent TS information efficiently. Our proposed method improves performance on various downstream tasks compared to state-of-the-art Transformer-based models, while being more efficient in terms of the number of parameters and training time.
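The following is a minimal sketch (not the authors' released code) of the two core ideas in the abstract: an MLP embeds each patch independently, and each patch is reconstructed from its own embedding only, with no cross-patch dependency. The patch length, embedding size, layer structure, and class name are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class PatchwiseMLPAutoencoder(nn.Module):
    """Embeds and reconstructs each time series patch independently."""

    def __init__(self, patch_len: int = 16, embed_dim: int = 128):
        super().__init__()
        # Encoder: maps a single patch to its embedding,
        # without access to any other patch.
        self.encoder = nn.Sequential(
            nn.Linear(patch_len, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Decoder: reconstructs the same patch from its own embedding only.
        self.decoder = nn.Linear(embed_dim, patch_len)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # x: (batch, num_patches, patch_len). The MLP acts on the last
        # dimension, so every patch is processed without seeing its neighbors.
        z = self.encoder(x)        # (batch, num_patches, embed_dim)
        x_hat = self.decoder(z)    # (batch, num_patches, patch_len)
        return z, x_hat

# Example: a batch of 8 series, each split into 24 patches of length 16.
model = PatchwiseMLPAutoencoder(patch_len=16, embed_dim=128)
x = torch.randn(8, 24, 16)
z, x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # per-patch reconstruction objective
```

In contrast to masked modeling with a Transformer, nothing in this sketch lets information flow between patches; the complementary contrastive learning component described in the abstract, which captures adjacent-patch information hierarchically, is omitted here.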
