Poster
Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
Lin Chen · Qian Yu · Hannah Lawrence · Amin Karbasi
We study switching-constrained online convex optimization (OCO), in which the player has a limited number of opportunities to change her action. While the discrete analog of this online learning task has been studied extensively, previous work in the continuous setting has neither established the minimax rate nor achieved it algorithmically. In this paper, we show that $ T $-round switching-constrained OCO with fewer than $ K $ switches has a minimax regret of $ \Theta(\frac{T}{\sqrt{K}}) $. In particular, the regret is at least $ \frac{T}{\sqrt{2K}} $ in one dimension and at least $ \frac{T}{\sqrt{K}} $ in higher dimensions. The lower bound in higher dimensions is attained by an orthogonal subspace argument. In one dimension, a novel adversarial strategy yields an $ \Omega(\frac{T}{\sqrt{K}}) $ lower bound, but a precise minimax analysis including constants is more involved. To establish the tighter one-dimensional result, we introduce the \emph{fugal game} relaxation, whose minimax regret lower bounds that of switching-constrained OCO. We show that the minimax regret of the fugal game is at least $ \frac{T}{\sqrt{2K}} $, thereby establishing the optimal minimax lower bound in one dimension. For the matching upper bound, we show that a mini-batching algorithm attains $ O(\frac{T}{\sqrt{K}}) $ regret in any dimension, and we conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$. This is in sharp contrast to its discrete counterpart, the switching-constrained prediction-from-experts problem, which exhibits a phase transition in minimax regret between the low-switching and high-switching regimes.
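The mini-batching reduction behind the upper bound is simple to sketch: partition the $ T $ rounds into $ K $ blocks, hold the action fixed within each block (so the player switches at most $ K-1 $ times), and update between blocks with online gradient descent on the aggregated block gradients. The code below is a minimal illustration under assumed simplifications (a one-dimensional domain $[0,1]$, callable per-round losses and subgradients); `minibatch_ogd` and its arguments are illustrative names for exposition, not the paper's exact construction.

```python
import math

def minibatch_ogd(losses, grads, T, K, D=1.0, G=1.0):
    """Mini-batched OGD sketch: plays a fixed action within each of K blocks,
    so the action changes at most K - 1 times over T rounds.

    losses[t](x) -> convex loss of action x at round t
    grads[t](x)  -> a (sub)gradient of losses[t] at x
    D, G         -> assumed domain diameter and per-round gradient bound
    """
    block = math.ceil(T / K)               # rounds per block
    eta = D / (G * block * math.sqrt(K))   # step size for the K-round meta-game
    x = 0.5                                # initial action in the assumed domain [0, 1]
    total_loss = 0.0
    for start in range(0, T, block):
        g_sum = 0.0
        for t in range(start, min(start + block, T)):
            total_loss += losses[t](x)     # same x all block: no switch inside
            g_sum += grads[t](x)
        # One projected gradient step per block boundary: at most one switch each.
        x = min(1.0, max(0.0, x - eta * g_sum))
    return total_loss
```

The regret accounting, under these assumptions: each aggregated block gradient is at most $ G \lceil T/K \rceil $, so online gradient descent over the $ K $-round meta-game incurs regret $ O(D G (T/K) \sqrt{K}) = O(\frac{T}{\sqrt{K}}) $, matching the lower bound up to constants.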
Author Information
Lin Chen (University of California, Berkeley)
Qian Yu (University of Southern California)
Hannah Lawrence (Flatiron Institute)
Amin Karbasi (Yale University)
More from the Same Authors
- 2021 Poster: Multiple Descent: Design Your Own Generalization Curve »
  Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
- 2020 Poster: Submodular Maximization Through Barrier Functions »
  Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
- 2020 Poster: Continuous Submodular Maximization: Beyond DR-Submodularity »
  Moran Feldman · Amin Karbasi
- 2020 Spotlight: Submodular Maximization Through Barrier Functions »
  Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
- 2020 Poster: Online MAP Inference of Determinantal Point Processes »
  Aditya Bhaskara · Amin Karbasi · Silvio Lattanzi · Morteza Zadimoghaddam
- 2019 Poster: Adaptive Sequence Submodularity »
  Marko Mitrovic · Ehsan Kazemi · Moran Feldman · Andreas Krause · Amin Karbasi
- 2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback »
  Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
- 2019 Poster: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match »
  Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
- 2018: Posters (all accepted papers) + Break »
  Jianyu Wang · Denis Gudovskiy · Ziheng Jiang · Michael Kaufmann · Andreea Anghel · James Bradbury · Nikolas Ioannou · Nitin Agrawal · Emma Tosch · Gyeongin Yu · Keno Fischer · Jarrett Revels · Giuseppe Siracusano · Yaoqing Yang · Jeff Johnson · Yang You · Hector Yuen · Chris Ying · Honglei Liu · Nikoli Dryden · Xiangxi Mo · Yangzihao Wang · Amit Juneja · Micah Smith · Qian Yu · pramod gupta · Deepak Narayanan · Keshav Santhanam · Tim Capes · Abdul Dakkak · Norman Mu · Ke Deng · Liam Li · Joao Carreira · Luis Remis · Deepti Raghavan · Una-May O'Reilly · Amanpreet Singh · Mahmoud (Mido) Assran · Eugene Wu · Eytan Bakshy · Jinliang Wei · Michael Innes · Viral Shah · Haibin Lin · Conrad Sanderson · Ryan Curtin · Marcus Edel
- 2018 Poster: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
  Moran Feldman · Amin Karbasi · Ehsan Kazemi
- 2018 Spotlight: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
  Moran Feldman · Amin Karbasi · Ehsan Kazemi
- 2017 Workshop: Discrete Structures in Machine Learning »
  Yaron Singer · Jeff A Bilmes · Andreas Krause · Stefanie Jegelka · Amin Karbasi
- 2017 Poster: Interactive Submodular Bandit »
  Lin Chen · Andreas Krause · Amin Karbasi
- 2017 Poster: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
  Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
- 2017 Oral: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
  Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
- 2017 Poster: Gradient Methods for Submodular Maximization »
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
- 2016 Poster: Estimating the Size of a Large Network and its Communities from a Random Sample »
  Lin Chen · Amin Karbasi · Forrest W. Crawford
- 2016 Poster: Fast Distributed Submodular Cover: Public-Private Data Summarization »
  Baharan Mirzasoleiman · Morteza Zadimoghaddam · Amin Karbasi
- 2015 Poster: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
  Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause
- 2015 Spotlight: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
  Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause