Poster in Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Self-supervised Learning for User Sequence Modeling

Yuhan Liu · Lin Ning · Neo Wu · Karan Singhal · Philip Mansfield · Devora Berlowitz · Bradley Green


Abstract:

Self-supervised learning (SSL) has proven highly effective for learning representations from unlabeled data, especially in vision and NLP tasks. We aim to transfer this success to user sequence modeling, where users perform a sequence of actions drawn from a large discrete domain (e.g., video views, movie ratings). Since this data type is completely different from images or natural language, we can no longer use pretrained foundation models and must find an efficient way to train from scratch. In this work, we propose an adaptation of Barlow Twins with an augmentation method and architecture suited to user sequence data. We evaluate our method on the MovieLens 1M, MovieLens 20M, and Yelp datasets, observing an 8%–20% improvement in accuracy on three downstream tasks compared to the dual encoder model, which is commonly used for user modeling in recommendation systems. Our method helps learn useful sequence-level information for user modeling, and it is especially beneficial when labeled data is limited.
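For readers unfamiliar with the objective the abstract builds on, the sketch below shows the standard Barlow Twins loss applied to embeddings of two augmented views of the same user sequence. This is the generic loss from the original Barlow Twins paper, not the authors' specific adaptation; the paper's augmentation method and encoder architecture are not shown, and the function name, the off-diagonal weight, and the normalization epsilon are illustrative assumptions.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """Barlow Twins loss for two (batch, dim) embedding matrices of
    augmented views of the same user sequences (generic form; the
    weight 5e-3 is an assumed hyperparameter, not from this paper)."""
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Empirical cross-correlation matrix between the two views: (dim, dim)
    c = (z_a.T @ z_b) / n
    # Invariance term: pull diagonal entries toward 1
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```

In practice the two views would come from applying the paper's sequence augmentations (e.g., perturbing a user's action history) before encoding, so the loss encourages embeddings that are invariant to the augmentation while keeping individual dimensions decorrelated.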
