

Poster

Online Forecasting of Total-Variation-bounded Sequences

Dheeraj Baby · Yu-Xiang Wang

East Exhibition Hall B, C #22

Keywords: [ Online Learning ] [ Algorithms -> Regression ] [ Applications -> Denoising ] [ Applications -> Signal Processing ] [ Applications -> Time Series Analysis ]


Abstract: We consider the problem of online forecasting of sequences of length $n$ with total variation at most $C_n$, using observations contaminated by independent $\sigma$-subgaussian noise. We design an $O(n \log n)$-time algorithm that achieves a cumulative square error of $\tilde{O}(n^{1/3} C_n^{2/3} \sigma^{4/3} + C_n^2)$ with high probability. We also prove a lower bound that matches the upper bound in all parameters (up to a $\log(n)$ factor). To the best of our knowledge, this is the first **polynomial-time** algorithm that achieves the optimal $O(n^{1/3})$ rate in forecasting total-variation-bounded sequences and the first algorithm that **adapts to unknown** $C_n$. Our proof techniques leverage the special localized structure of the Haar wavelet basis and the adaptivity to unknown smoothness parameters in classical wavelet smoothing [Donoho et al., 1998]. We also compare our model to the rich literature on dynamic regret minimization and nonstationary stochastic optimization, where our problem can be treated as a special case. We show that the workhorses in those settings --- online gradient descent and its variants with a fixed restarting schedule --- are instances of a class of **linear forecasters** that incur a suboptimal regret of $\tilde{\Omega}(\sqrt{n})$. This implies that more adaptive algorithms are necessary to obtain the optimal rate.
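To make the protocol in the abstract concrete, here is a minimal Python sketch of the *setup only*, not the paper's algorithm: it builds a total-variation-bounded sequence, runs the forecast-then-observe loop under Gaussian (hence σ-subgaussian) noise, and scores two naive baselines, including a fixed-window moving average from the class of linear forecasters discussed above. All function names, the piecewise-constant construction, and the window length are illustrative assumptions.

```python
# Minimal sketch of the online forecasting setup (illustrative; NOT the authors' algorithm).
import numpy as np

rng = np.random.default_rng(0)

def make_tv_bounded_sequence(n, C_n, num_jumps=10):
    """Piecewise-constant sequence of length n whose total variation
    sum_t |theta_{t+1} - theta_t| equals C_n (an assumed construction)."""
    theta = np.zeros(n)
    jump_locs = np.sort(rng.choice(np.arange(1, n), size=num_jumps, replace=False))
    jump_sizes = rng.uniform(-1, 1, size=num_jumps)
    jump_sizes *= C_n / np.abs(jump_sizes).sum()   # scale so the jumps use the full TV budget
    level, prev = 0.0, 0
    for loc, size in zip(jump_locs, jump_sizes):
        theta[prev:loc] = level
        level += size
        prev = loc
    theta[prev:] = level
    return theta

def cumulative_square_error(forecaster, theta, sigma):
    """Online protocol: at each step t the forecaster predicts theta_t from the past
    noisy observations y_1..y_{t-1}, then y_t = theta_t + noise is revealed.
    Returns sum_t (prediction_t - theta_t)^2."""
    err, y_past = 0.0, []
    for t in range(len(theta)):
        pred = forecaster(y_past)
        err += (pred - theta[t]) ** 2
        y_past.append(theta[t] + sigma * rng.standard_normal())
    return err

# Baseline 1: predict the most recent observation.
last_obs = lambda y: y[-1] if y else 0.0

# Baseline 2: a fixed-window moving average -- a linear forecaster in the sense of the
# abstract (the window length 32 is an arbitrary illustrative choice).
def moving_average(window):
    return lambda y: float(np.mean(y[-window:])) if y else 0.0

n, C_n, sigma = 4096, 5.0, 1.0
theta = make_tv_bounded_sequence(n, C_n)
print("last-observation   :", cumulative_square_error(last_obs, theta, sigma))
print("moving average (32):", cumulative_square_error(moving_average(32), theta, sigma))
```

A piecewise-constant sequence is used here only because its total variation is the sum of the absolute jump sizes, which makes the $C_n$ budget easy to enforce; any sequence satisfying the TV constraint fits the same protocol.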
