

Poster in Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Self-Distilled Representation Learning for Time Series

Felix Pieper · Konstantin Ditschuneit · Martin Genzel · Alexandra Lindt · Johannes Otterbach


Abstract:

Self-supervised learning for time-series data holds potential similar to that recently unleashed in Natural Language Processing and Computer Vision. While most existing works in this area focus on contrastive learning, we propose a conceptually simple yet powerful non-contrastive approach, based on the data2vec self-distillation framework. The core of our method is a student-teacher scheme that predicts the latent representation of an input time series from masked views of the same time series. This strategy avoids strong modality-specific assumptions and biases typically introduced by the design of contrastive sample pairs. We demonstrate the competitiveness of our approach for classification and forecasting as downstream tasks, comparing with state-of-the-art self-supervised learning methods on the UCR and UEA archives as well as the ETT and Electricity datasets.
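To make the described student-teacher scheme concrete, below is a minimal sketch of a data2vec-style self-distillation step for time series, assuming the standard mechanics of that framework: an EMA-updated teacher that sees the unmasked series, layer-averaged teacher latents as regression targets, and a student that predicts those targets at masked time steps. All module and function names (`TinyEncoder`, `data2vec_style_step`, masking by zeroing) are hypothetical illustrations, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy Transformer encoder over a time series (stand-in for the real backbone)."""
    def __init__(self, in_dim=1, d_model=64, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layers)])

    def forward(self, x):
        h = self.proj(x)
        hidden = []
        for blk in self.blocks:
            h = blk(h)
            hidden.append(h)
        return hidden  # list of per-layer representations

def data2vec_style_step(student, teacher, x, mask_ratio=0.5, tau=0.999):
    """One self-distillation step: the student sees a masked view of x and regresses
    the teacher's layer-averaged latents at the masked positions; the teacher is an
    EMA copy of the student (assumed hyperparameters, for illustration only)."""
    B, T, _ = x.shape
    mask = torch.rand(B, T, device=x.device) < mask_ratio        # True = masked time step

    # Teacher: full (unmasked) view, no gradients; target = mean over layer outputs.
    with torch.no_grad():
        target = torch.stack(teacher(x)).mean(dim=0)

    # Student: masked view (masked steps simply zeroed in this sketch).
    x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = student(x_masked)[-1]                                  # top-layer student output

    # Regression loss only on masked positions.
    loss = nn.functional.mse_loss(pred[mask], target[mask])

    # Exponential moving average update of the teacher weights.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(tau).add_(p_s, alpha=1.0 - tau)
    return loss

# Usage sketch: batch of 8 univariate series with 128 time steps.
student = TinyEncoder()
teacher = copy.deepcopy(student).requires_grad_(False)
x = torch.randn(8, 128, 1)
loss = data2vec_style_step(student, teacher, x)
loss.backward()
```

Because the targets are latent representations rather than augmented sample pairs, no modality-specific positive/negative construction is needed, which is the non-contrastive property the abstract highlights.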
