
Workshop: Your Model is Wrong: Robustness and misspecification in probabilistic modeling

Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization

Blair Bilodeau · Jeffrey Negrea · Dan Roy


We introduce the semi-adversarial framework for sequential prediction with expert advice, where data are generated from distributions varying arbitrarily within an unknown constraint set. We quantify relaxations of the classical i.i.d. assumption along a spectrum induced by this framework, with i.i.d. sequences at one extreme and adversarial mechanisms at the other. The Hedge algorithm, which corresponds to using an expert-valued Bayesian power posterior to make decisions, was recently shown to be simultaneously optimal for both i.i.d. and adversarial data. We demonstrate that Hedge is suboptimal at all points of the spectrum between these endpoints. Further, we introduce a novel algorithm and prove that it achieves the minimax optimal rate of regret at all points along the semi-adversarial spectrum---without advance knowledge of the constraint set. This algorithm corresponds to follow-the-regularized-leader, constructed by replacing the Shannon entropy regularizer of Hedge with the square root of the Shannon entropy.
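The two algorithms contrasted in the abstract can be sketched concretely. Hedge weights experts in proportion to exponentiated negative cumulative loss (the "Bayesian power posterior" view), while the proposed algorithm is follow-the-regularized-leader with the square root of Shannon entropy as regularizer. The sketch below is illustrative only: the function names, the regularization weight `lam`, and the mirror-descent solver for the FTRL step are our own choices, not the authors' implementation, and the paper's tuning of the regularization schedule is omitted.

```python
import math

def hedge_weights(cum_losses, eta):
    """Hedge / exponential weights: p_i proportional to exp(-eta * L_i).

    This is FTRL on the simplex with the (scaled) Shannon entropy
    regularizer, and has a closed form.
    """
    m = min(cum_losses)  # shift losses for numerical stability
    w = [math.exp(-eta * (L - m)) for L in cum_losses]
    s = sum(w)
    return [x / s for x in w]

def root_entropic_weights(cum_losses, lam, n_steps=3000, lr=0.05):
    """FTRL with a square-root-entropy regularizer (numerical sketch):

        p = argmin_p  <p, L> - lam * sqrt(H(p)),   p in the simplex,

    where H(p) = -sum_i p_i log p_i. Since sqrt of the concave H is
    concave, the objective is convex; here we solve it approximately by
    exponentiated-gradient (mirror) descent starting from uniform weights.
    """
    n = len(cum_losses)
    p = [1.0 / n] * n
    for _ in range(n_steps):
        H = -sum(pi * math.log(pi) for pi in p)        # Shannon entropy
        coef = lam / (2.0 * math.sqrt(max(H, 1e-12)))  # d sqrt(H) scaling
        # gradient of <p, L> - lam * sqrt(H(p)); dH/dp_i = -(log p_i + 1)
        grad = [cum_losses[i] + coef * (math.log(p[i]) + 1.0)
                for i in range(n)]
        p = [p[i] * math.exp(-lr * grad[i]) for i in range(n)]
        s = sum(p)
        p = [x / s for x in p]  # renormalize onto the simplex
    return p
```

For example, with cumulative losses `[3.0, 1.0, 2.0]`, both rules concentrate on the second expert, but the square-root-entropy regularizer grows more slowly near the uniform distribution, which changes how aggressively mass concentrates as losses accumulate.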
