

Poster

Beyond task diversity: provable representation transfer for sequential multitask linear bandits

Thang Duong · Zhi Wang · Chicheng Zhang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an $m$-dimensional subspace of $\mathbb{R}^d$, thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse; that is, their parameters uniformly span the $m$-dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we consider a setting extending beyond the task diversity assumption. We present an algorithm that can efficiently learn and transfer low-rank representations. When facing $N$ tasks each played over $\tau$ rounds, our algorithm achieves a regret guarantee of $\tilde{O}\left(Nm \sqrt{\tau} + N^{\frac{2}{3}} \tau^{\frac{2}{3}} d m^{\frac{1}{3}} \right)$ under mild assumptions. This improves upon the baseline $\tilde{O}\left(Nd \sqrt{\tau}\right)$ that does not leverage the low-rank structure. We demonstrate empirically on synthetic data that our algorithm outperforms baseline algorithms that rely on the task diversity assumption.
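The shared structure the abstract describes — $N$ task parameters in $\mathbb{R}^d$ that all lie in a common $m$-dimensional subspace — can be illustrated with a minimal numpy sketch. This is not the authors' algorithm; it only shows the setup and one standard way a representation could be estimated, via the top-$m$ left singular vectors of stacked per-task parameter estimates. All dimensions and the noise level below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N = 20, 3, 10  # ambient dim, subspace dim, number of tasks (illustrative)

# Shared low-rank structure: every task parameter theta_i lies in an
# m-dimensional subspace of R^d spanned by the columns of B.
B, _ = np.linalg.qr(rng.standard_normal((d, m)))  # orthonormal basis, d x m
W = rng.standard_normal((m, N))                   # per-task coordinates
Theta = B @ W                                     # column i = theta_i in R^d

# Noisy per-task estimates, as if obtained from single-task regression
# after playing each task for some rounds.
Theta_hat = Theta + 0.01 * rng.standard_normal((d, N))

# Estimate the shared representation: top-m left singular vectors of the
# stacked estimates span (approximately) the same subspace as B.
U, _, _ = np.linalg.svd(Theta_hat, full_matrices=False)
B_hat = U[:, :m]

# Quality check: projecting the true parameters onto the estimated
# subspace should leave only a small residual.
proj = B_hat @ (B_hat.T @ Theta)
err = np.linalg.norm(Theta - proj) / np.linalg.norm(Theta)
print(f"relative projection residual: {err:.4f}")
```

Note that this SVD step implicitly benefits when the coordinates $w_i$ spread across all $m$ directions — exactly the task diversity assumption the paper aims to move beyond.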
