
Workshop: Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022

A Simple Framework for Active Learning to Rank

Qingzhong Wang · Haifang Li · Haoyi Xiong · Wen Wang · Jiang Bian · Yu Lu · Shuaiqiang Wang · zhicong cheng · Dawei Yin · Dejing Dou

Abstract: Learning to rank (LTR) plays a critical role in search engines---an extremely large number of queries, together with their relevant webpages, must be labeled in a timely manner to train and update the online LTR models. To reduce the cost and time of labeling queries/webpages, we study the problem of \emph{Active Learning to Rank} (\emph{\bf active LTR}), which selects unlabeled queries for annotation and training. Specifically, we first investigate the criterion \emph{Ranking Entropy (RE)}, which characterizes the entropy of the relevant webpages under a query as ranked by a sequence of online LTR models at different checkpoints, using a Query-By-Committee (QBC) method. Then, we explore a new criterion, \emph{Prediction Variances (PV)}, that measures the variance of prediction results for all relevant webpages under a query. Our empirical studies find that RE tends to favor low-frequency queries from the pool for labeling, while PV prioritizes high-frequency queries. Finally, we combine these two complementary criteria as the sample selection strategy for active learning. Extensive experiments with comparisons to baseline algorithms show that the proposed approach trains LTR models achieving higher Discounted Cumulative Gain (\ie, a relative improvement of $\Delta$DCG$_4$=1.38\%) under the same labeling budget, while the proposed strategies discover 43\% more valid training pairs for effective training.
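A minimal sketch of the two selection criteria described above, assuming one plausible formulation: the committee is a set of LTR model checkpoints that each score the candidate webpages of a query; RE is taken here as the entropy of the committee's top-ranked-webpage votes, and PV as the mean per-webpage score variance across checkpoints. The exact formulas used in the paper may differ; function names and the score-matrix layout are illustrative assumptions.

```python
import numpy as np

# scores[c, d]: relevance score that committee checkpoint c assigns to
# webpage d for a single query (Query-By-Committee over LTR checkpoints).
# Both formulas below are assumed illustrations, not the authors' code.

def ranking_entropy(scores: np.ndarray) -> float:
    """Entropy of which webpage each checkpoint ranks first (assumed form)."""
    n_checkpoints, n_docs = scores.shape
    top1 = scores.argmax(axis=1)                # top-ranked webpage per checkpoint
    counts = np.bincount(top1, minlength=n_docs)
    p = counts / n_checkpoints                  # committee vote distribution
    p = p[p > 0]                                # drop zero-probability terms
    return float(-(p * np.log(p)).sum())

def prediction_variance(scores: np.ndarray) -> float:
    """Mean variance of each webpage's score across checkpoints (assumed form)."""
    return float(scores.var(axis=0).mean())
```

Under this sketch, a query on which all checkpoints agree yields RE = 0 and PV = 0, so a combined strategy would skip it; queries where the committee disagrees score high on one or both criteria and are sent for annotation.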
