
Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms
Alexander Wei · Fred Zhang

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #424

We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions. The goal is to design algorithms that are both consistent and robust, meaning that the algorithm performs well when predictions are accurate while maintaining worst-case guarantees. Such algorithms have been studied in a recent line of work due to Lykouris and Vassilvitskii (ICML '18) and Purohit et al. (NeurIPS '18). They provide robustness-consistency trade-offs for a variety of online problems. However, they leave open the question of whether these trade-offs are tight, i.e., to what extent such trade-offs are necessary. In this paper, we provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions. We focus on the classic problems of ski-rental and non-clairvoyant scheduling and provide optimal trade-offs in various settings.
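To make the robustness-consistency trade-off concrete, the following is a minimal sketch of the deterministic prediction-augmented ski-rental strategy of Purohit et al. (NeurIPS '18), one of the algorithms whose trade-offs this paper's lower bounds address. The function names and simulation setup here are illustrative, not from the paper: the algorithm sees a predicted number of ski days and a trust parameter λ, buys early when the prediction favors buying, and delays the purchase otherwise, yielding (roughly) (1+λ)-consistency and (1+1/λ)-robustness.

```python
import math

def ski_rental_with_prediction(b, predicted_days, lam, actual_days):
    """Sketch of the deterministic learning-augmented ski-rental
    strategy (Purohit et al., NeurIPS '18). Renting costs 1 per day;
    buying costs b. lam in (0, 1] controls trust in the prediction:
    small lam follows the prediction closely (better consistency),
    large lam hedges against it (better robustness).
    Returns the total cost paid over actual_days of skiing."""
    if predicted_days >= b:
        # Prediction says buying is worthwhile: buy early.
        buy_day = math.ceil(lam * b)
    else:
        # Prediction says renting is better: delay buying to hedge.
        buy_day = math.ceil(b / lam)
    if actual_days < buy_day:
        return actual_days           # rented every day, never bought
    return (buy_day - 1) + b         # rented until buy_day, then bought
```

For example, with b = 10 and λ = 0.5, an accurate large prediction gives cost 14 against an optimal cost of 10 (ratio 1.4 ≤ 1 + λ), while a badly wrong small prediction still costs at most 29 (ratio 2.9 ≤ 1 + 1/λ), illustrating the two sides of the trade-off.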

Author Information

Alexander Wei (UC Berkeley)
Fred Zhang (UC Berkeley)

I am a PhD student in the Theory Group of the EECS Department at UC Berkeley, advised by Jelani Nelson. My research lies broadly in algorithm design. I am particularly interested in questions arising from high-dimensional statistics, machine learning, and processing massive data.
