

Oral Poster

Enhancing Preference-based Linear Bandits via Human Response Time

Shen Li · Yuyang Zhang · Zhaolin Ren · Claire Liang · Na Li · Julie A Shah

East Exhibit Hall A-C #4901
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST
 
Oral presentation: Oral Session 2A: Agents
Wed 11 Dec 3:30 p.m. PST — 4:30 p.m. PST

Abstract:

Interactive preference learning systems present humans with queries as pairs of options; humans then select their preferred choice, allowing the system to infer preferences from these binary choices. While binary choice feedback is simple and widely used, it offers limited information about preference strength. To address this, we leverage human response times, which inversely correlate with preference strength, as complementary information. We introduce a computationally efficient method based on the EZ-diffusion model, combining choices and response times to estimate the underlying human utility function. Theoretical and empirical comparisons with traditional choice-only estimators show that for queries where humans have strong preferences (i.e., "easy" queries), response times provide valuable complementary information and enhance utility estimates. We integrate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that incorporating response times significantly accelerates preference learning.
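The idea of combining choices and response times can be illustrated with a small simulation. The sketch below is hypothetical and not the authors' code: it assumes the standard symmetric drift-diffusion model, where for drift v (proportional to the utility difference of the two options) and decision thresholds ±a, E[choice] = tanh(a·v) for choices coded ±1 and E[time] = (a/v)·tanh(a·v), so the ratio E[choice]/E[time] = v/a yields a simple choice-plus-response-time drift estimator in the spirit of the EZ-diffusion approach. The function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, a, n, dt=1e-3, sigma=1.0):
    """Simulate n trials of a symmetric drift-diffusion process.

    Returns choices (+1/-1, depending on which threshold +/-a is hit
    first) and the corresponding response times. Illustrative only.
    """
    choices, times = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < a:
            # Euler-Maruyama step of dX = v dt + sigma dW
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1.0 if x > 0 else -1.0)
        times.append(t)
    return np.array(choices), np.array(times)

def estimate_drift_choice_rt(choices, times, a):
    """Drift estimate using both choices and response times.

    Uses the DDM identity E[choice]/E[time] = v/a, so
    v_hat = a * mean(choices) / mean(times).
    """
    return a * choices.mean() / times.mean()

# Recover an assumed true drift from simulated choice/RT data.
v_true, a = 0.8, 1.0
c, t = simulate_ddm(v_true, a, n=1000)
v_hat = estimate_drift_choice_rt(c, t, a)
print(round(v_hat, 2))
```

A choice-only estimator would invert only E[choice] = tanh(a·v), discarding the timing information; the paper's point is that for "easy" queries the response times carry additional signal about preference strength.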
