

Poster in Workshop: Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022

Achieving Diversity and Relevancy in Zero-Shot Recommender Systems for Human Evaluations

Tiancheng Yu · Yifei Ma · Anoop Deoras


Abstract:

Recommender systems (RecSys) often require user-behavioral data to learn good preference patterns. However, that user data is typically collected by an already-working RecSys in the first place. This creates a gap: we hope to establish general recommendation patterns without relying on user data first, while performance is then evaluated by real human oracles. On top of that, we aim to introduce diversity into the recommendation results, based on uncertainty principles, to yield good trade-offs between recommendation coverage and relevancy. Assuming that we have a corpus of item descriptions for all the items in our recommendation catalog, we propose two methods based on pretrained large language models (LLMs): Bert Corpus Tuning (Bert-CT) and Bert Variational Corpus Tuning (Bert-VarCT). Bert-CT adapts Bert to attend to domain-specific word tokens in the corpus of item descriptions, and Bert-VarCT introduces diversity without significant changes to the network design. We show that both methods achieve our design goals, as measured by data from real humans on a crowd-sourcing platform. Additionally, our approach is general and minimalistic. We release our code for reproducibility and extensibility at https://github.com/awslabs/crowd-coachable-recommendations.
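The sketch below illustrates the general idea of zero-shot recommendation from item descriptions, not the authors' released implementation: a pretrained BERT encoder embeds catalog descriptions, items are ranked by embedding similarity, and a Gaussian perturbation of the embeddings stands in for a diversity-inducing step. The function names (encode_items, recommend) and the noise_scale knob are hypothetical, introduced only for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Pretrained BERT used as a frozen text encoder (assumption: bert-base-uncased).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def encode_items(descriptions, noise_scale=0.0):
    """Mean-pool BERT token embeddings into one vector per item description."""
    batch = tokenizer(descriptions, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    emb = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1)   # masked mean pooling
    if noise_scale > 0:
        # Crude stand-in for a variational/diversity mechanism: perturb embeddings.
        emb = emb + noise_scale * torch.randn_like(emb)
    return torch.nn.functional.normalize(emb, dim=-1)

def recommend(query_idx, item_embs, k=5):
    """Return the k catalog items most similar to the query item (excluding itself)."""
    scores = item_embs @ item_embs[query_idx]
    scores[query_idx] = float("-inf")
    return scores.topk(k).indices.tolist()

catalog = [
    "wireless noise-cancelling headphones",
    "over-ear studio headphones",
    "stainless steel chef's knife",
    "bluetooth portable speaker",
]
embs = encode_items(catalog, noise_scale=0.1)
print(recommend(0, embs, k=2))
```

Because only item descriptions are used, no user interaction data is needed at recommendation time; the noise term trades some relevancy for coverage, loosely mirroring the diversity goal described in the abstract.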
