Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization
Blair Bilodeau · Jeffrey Negrea · Daniel M Roy

We introduce the semi-adversarial framework for sequential prediction with expert advice, where data are generated from distributions varying arbitrarily within an unknown constraint set. We quantify relaxations of the classical i.i.d. assumption along a spectrum induced by this framework, with i.i.d. sequences at one extreme and adversarial mechanisms at the other. The Hedge algorithm, which corresponds to using an expert-valued Bayesian power posterior to make decisions, was recently shown to be simultaneously optimal for both i.i.d. and adversarial data. We demonstrate that Hedge is suboptimal at all points of the spectrum between these endpoints. Further, we introduce a novel algorithm and prove that it achieves the minimax optimal rate of regret at all points along the semi-adversarial spectrum---without advance knowledge of the constraint set. This algorithm corresponds to follow-the-regularized-leader, constructed by replacing the Shannon entropy regularizer of Hedge with the square root of Shannon entropy.
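The Hedge algorithm referenced in the abstract is standard exponential weights over experts: at each round, play a distribution proportional to the exponentiated negative cumulative losses. A minimal sketch, assuming uniform losses in [0, 1] and the classical learning-rate tuning (the toy data, expert gap of 0.3, and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def hedge_weights(cum_losses, eta):
    """Exponential-weights distribution over experts given cumulative losses."""
    # Subtract the minimum before exponentiating for numerical stability.
    logits = -eta * (cum_losses - cum_losses.min())
    w = np.exp(logits)
    return w / w.sum()

# Toy run: 3 experts, expert 0 consistently incurs the lowest loss.
rng = np.random.default_rng(0)
T, K = 100, 3
eta = np.sqrt(8 * np.log(K) / T)  # classical tuning for Hedge over T rounds
cum_losses = np.zeros(K)
for t in range(T):
    losses = rng.uniform(size=K) + np.array([0.0, 0.3, 0.3])
    p = hedge_weights(cum_losses, eta)  # distribution played at round t
    cum_losses += losses

p = hedge_weights(cum_losses, eta)  # final weights concentrate on expert 0
```

The paper's proposed algorithm keeps this follow-the-regularized-leader structure but swaps the Shannon entropy regularizer for its square root; that variant has no closed-form update like the one above and is not sketched here.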

Author Information

Blair Bilodeau (University of Toronto)
Jeffrey Negrea (University of Toronto)
Daniel M Roy (University of Toronto)
