

Oral Poster

Enhancing Preference-based Linear Bandits via Human Response Time

Shen Li · Yuyang Zhang · Zhaolin Ren · Claire Liang · Na Li · Julie A Shah

East Exhibit Hall A-C #4901
Paper · Slides · Poster · OpenReview
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST
 
Oral presentation: Oral Session 2A: Agents
Wed 11 Dec 3:30 p.m. PST — 4:30 p.m. PST

Abstract:

Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html.
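The abstract's core idea is that, under a diffusion-model account of decision making, response times carry information about the magnitude of the utility difference, not just its sign. The sketch below is a minimal illustration of this intuition, not the authors' algorithm: it assumes the standard symmetric drift-diffusion identities E[c] = tanh(a·v) and E[t] = (a/v)·tanh(a·v) for signed choice c, decision time t, drift v = xᵀθ, and barrier a, so that E[c]/E[t] = v/a is linear in θ. All names and the simulation setup are hypothetical.

```python
import numpy as np

# Hypothetical sketch: recover a linear utility vector theta from pairwise
# queries using both binary choices and response times, via the standard
# drift-diffusion (EZ-style) moment identities. For drift v = x^T theta,
# unit noise, and symmetric barriers at +/- a:
#   E[c] = tanh(a * v)             (c in {-1, +1}: signed choice)
#   E[t] = (a / v) * tanh(a * v)   (t: mean decision time)
# hence E[c] / E[t] = v / a, which is linear in theta (up to the scale a).

rng = np.random.default_rng(0)
d, n, a = 5, 2000, 1.0
theta_true = rng.normal(size=d)
theta_true /= np.linalg.norm(theta_true)

X = rng.normal(size=(n, d))            # query features: option differences
v = X @ theta_true                     # drift (utility difference) per query

# Simulate choices from the DDM choice probability; for simplicity use the
# mean decision time per query (a real experiment records noisy trial times).
p_pos = 1.0 / (1.0 + np.exp(-2.0 * a * v))        # P(choose first option)
c = np.where(rng.random(n) < p_pos, 1.0, -1.0)    # signed binary choice
v_safe = np.where(np.abs(v) > 1e-8, v, 1e-8)
t = (a / v_safe) * np.tanh(a * v_safe)            # mean decision time

# Choice-only surrogate: least squares on signed choices recovers the
# direction of theta but ignores preference strength in each response.
theta_choice, *_ = np.linalg.lstsq(X, c, rcond=None)

# Choice + response-time estimator: regress c_i / t_i on x_i, since the
# identity E[c]/E[t] = v / a makes this target linear in theta.
theta_rt, *_ = np.linalg.lstsq(X, c / t, rcond=None)

def cosine(u, w):
    return float(u @ w / (np.linalg.norm(u) * np.linalg.norm(w)))

print("cosine(theta_choice, theta_true):", round(cosine(theta_choice, theta_true), 3))
print("cosine(theta_rt,     theta_true):", round(cosine(theta_rt,     theta_true), 3))
```

In a bandit setting, an estimator of this kind would be refit after each batch of queries and used to select the next pair of arms to compare; the paper's actual estimator, theory, and best-arm identification procedure are detailed in the linked materials.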
