

Poster in Affinity Workshop: WiML Workshop 1

Self-Supervised Visual Representation Learning for Time-series Clustering

Gaurangi Anand · Richi Nayak


Abstract:

Self-supervised and transfer learning have been shown to yield more generalizable solutions than purely supervised regimes, a finding recently reinforced by advances in both computer vision and natural language processing. Here, we present a simple yet effective method that learns meaningful representations for 1D time-series data through their 2D visual patterns, without any external supervision. This is motivated by two factors: 1) supervision-disability, arising either from a lack of labelled data or from the absence of any supervisory signal, as in exploratory data analysis; and 2) human-basis, emulating a data scientist's visual perception to obtain visualization-based insights from data that is not inherently of a 2D/image type. We call the resulting representations Learned Deep Visual Representations (LDVR) for time-series. We first convert 1D time-series signals into 2D images and then apply self-supervised contrastive learning with pre-trained 2D CNNs to obtain time-series representations. Generalizability is demonstrated on diverse time-series datasets for the unsupervised task of clustering, where no prior knowledge of instance labels is used. The learnt representations yield more meaningful time-series clusters, validated through quantitative and qualitative analyses on the UCR time-series benchmark.
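
Below is a minimal sketch of the pipeline the abstract describes (imaging a 1D series, encoding it with a pre-trained 2D CNN, and clustering the embeddings). The abstract does not specify the imaging transform, the contrastive objective, or the backbone; a line-plot rendering, a SimCLR-style NT-Xent loss, and ResNet-18 are assumptions made for illustration, and the function names are hypothetical.

```python
# Illustrative sketch only: the imaging transform, contrastive objective, and
# backbone are assumptions, not the authors' exact method.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans


def series_to_image(x, size_inches=2.24, dpi=100):
    """Render a 1D series as a 2D line-plot image (H x W x 3, uint8)."""
    fig, ax = plt.subplots(figsize=(size_inches, size_inches), dpi=dpi)
    ax.plot(x, color="black", linewidth=1)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()
    plt.close(fig)
    return img


# Pre-trained 2D CNN used as the visual encoder; the classification head is removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def embed(series_list):
    """Image each series and encode it with the pre-trained CNN."""
    imgs = torch.stack([preprocess(series_to_image(x)) for x in series_list])
    with torch.no_grad():
        return backbone(imgs)  # (N, 512) embeddings


def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.T / temperature
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy data standing in for UCR series: two noisy sinusoidal groups.
    rng = np.random.default_rng(0)
    data = [np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
            for _ in range(10)]
    data += [np.cos(np.linspace(0, 2 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
             for _ in range(10)]
    z = embed(data)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(z.numpy())
    print(labels)
```

In the full pipeline, a contrastive fine-tuning loop using a loss like `nt_xent` over augmented views would adapt the encoder before the embeddings are clustered; it is omitted here for brevity.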
