

Poster

Tight Bounds for Learning RUMs from Small Slates

Flavio Chierichetti · Mirko Giacchini · Ravi Kumar · Alessandro Panconesi · Andrew Tomkins

West Ballroom A-D #5603
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: A Random Utility Model (RUM) is a classical model of user behavior defined by a distribution over $\mathbb{R}^n$. A user, presented with a subset of $\{1,\ldots,n\}$, selects the item of the subset with the highest utility, according to a utility vector drawn from the distribution. In practical settings, the subset is often small, as in the "ten blue links" of web search. In this paper, we consider a learning setting with complete information on user choices from subsets of size at most $k$. We show that $k=\Theta(\sqrt{n})$ is necessary and sufficient to predict the distribution of all user choices within an arbitrarily small constant error. Based on this upper bound, we obtain new algorithms for approximate RUM learning and variations thereof. Furthermore, we employ our lower bound for approximate RUM learning to derive lower bounds for fractional extensions of the well-studied $k$-deck and trace reconstruction problems.
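
To make the choice rule concrete, below is a minimal Monte Carlo sketch of a RUM: it repeatedly draws a utility vector from the underlying distribution and records which slate item attains the highest utility. The Gaussian utility distribution and the names (`rum_choice_probabilities`, `slate`) are illustrative assumptions, not from the paper; a RUM permits any distribution over $\mathbb{R}^n$.

```python
import numpy as np

def rum_choice_probabilities(sample_utilities, slate, num_draws=10_000, seed=0):
    """Estimate the RUM choice distribution over a slate by Monte Carlo.

    sample_utilities: callable(rng) -> utility vector in R^n
    slate: list of item indices (the subset shown to the user)
    """
    rng = np.random.default_rng(seed)
    counts = {i: 0 for i in slate}
    for _ in range(num_draws):
        u = sample_utilities(rng)
        # The user picks the slate item with the highest drawn utility.
        winner = max(slate, key=lambda i: u[i])
        counts[winner] += 1
    return {i: c / num_draws for i, c in counts.items()}

# Illustrative setup: n = 6 items with correlated Gaussian utilities
# (the Gaussian is an assumption for this sketch only).
n = 6
mean = np.linspace(0.0, 1.0, n)
cov = 0.5 * np.eye(n) + 0.5 * np.ones((n, n))
draw = lambda rng: rng.multivariate_normal(mean, cov)

slate = [0, 2, 5]  # a small slate, k = 3
print(rum_choice_probabilities(draw, slate))
```

The learning problem studied in the paper runs in the reverse direction: given such choice frequencies observed on slates of size at most $k$, recover a model that predicts the choice distribution on every subset.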
