

Spotlight Poster

Honor Among Bandits: No-Regret Learning for Online Fair Division

Ariel Procaccia · Ben Schiffer · Shirley Zhang

West Ballroom A-D #6607
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: We consider the problem of online fair division of indivisible goods to players when there are a finite number of types of goods and player values are drawn from distributions with unknown means. Our goal is to maximize social welfare subject to allocating the goods fairly in expectation. When a player's value for an item is unknown at the time of allocation, we show that this problem reduces to a variant of (stochastic) multi-armed bandits, where there exists an arm for each player's value for each type of good. At each time step, we choose a distribution over arms which determines how the next item is allocated. We consider two sets of fairness constraints for this problem: envy-freeness in expectation and proportionality in expectation. Our main result is the design of an explore-then-commit algorithm that achieves $\tilde{O}(T^{2/3})$ regret while maintaining either fairness constraint. This result relies on unique properties fundamental to fair-division constraints that allow faster rates of learning, despite the restricted action space.
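To make the explore-then-commit idea concrete, below is a minimal sketch of such a scheme for this setting, not the authors' exact algorithm. It assumes item types arrive with known frequencies, Gaussian value noise, round-robin exploration of length roughly T^{2/3}, and a commit step that solves a linear program (via scipy.optimize.linprog) maximizing estimated welfare subject to proportionality in expectation under the empirical means. All parameter names and the specific LP are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical explore-then-commit sketch for online fair division with
# unknown mean values; not the paper's exact construction or guarantees.

rng = np.random.default_rng(0)
n_players, n_types, T = 3, 4, 10_000
type_probs = np.full(n_types, 1.0 / n_types)                    # assumed known item-type frequencies
true_means = rng.uniform(0.2, 1.0, size=(n_players, n_types))   # unknown to the learner

explore_len = int(T ** (2 / 3))                                 # ~T^{2/3} exploration rounds

# --- Exploration: round-robin allocation to estimate each player's mean value per type ---
sums = np.zeros((n_players, n_types))
counts = np.zeros((n_players, n_types))
for t in range(explore_len):
    g = rng.choice(n_types, p=type_probs)                       # arriving item's type
    i = t % n_players                                           # round-robin recipient
    value = rng.normal(true_means[i, g], 0.1)                   # realized (noisy) value observed
    sums[i, g] += value
    counts[i, g] += 1
mu_hat = sums / np.maximum(counts, 1)                           # empirical mean estimates

# --- Commit: LP over p[i, g] = Pr[a type-g item goes to player i] ---
# Maximize estimated welfare subject to proportionality in expectation:
# each player's expected value >= (1/n) * their expected value for the whole stream.
nvar = n_players * n_types                                      # variables indexed as i * n_types + g
c = -(type_probs[None, :] * mu_hat).ravel()                     # negate welfare for minimization

A_eq = np.zeros((n_types, nvar))                                # each item type is fully allocated
for g in range(n_types):
    A_eq[g, g::n_types] = 1.0
b_eq = np.ones(n_types)

A_ub = np.zeros((n_players, nvar))                              # proportionality, written as <=
b_ub = np.zeros(n_players)
for i in range(n_players):
    A_ub[i, i * n_types:(i + 1) * n_types] = -(type_probs * mu_hat[i])
    b_ub[i] = -(type_probs * mu_hat[i]).sum() / n_players       # -E[value to i] <= -fair share

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
p = res.x.reshape(n_players, n_types)

# --- Exploitation: allocate remaining items according to the committed distribution ---
welfare = 0.0
for t in range(explore_len, T):
    g = rng.choice(n_types, p=type_probs)
    i = rng.choice(n_players, p=p[:, g] / p[:, g].sum())
    welfare += rng.normal(true_means[i, g], 0.1)

print("committed allocation distribution:\n", np.round(p, 3))
print(f"realized welfare after commit: {welfare:.1f}")
```

The LP is always feasible since the uniform rule p[i, g] = 1/n gives every player exactly a 1/n share; swapping the proportionality rows for pairwise envy constraints would give the envy-freeness-in-expectation variant.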
