Stability Bounds for Non-i.i.d. Processes
Mehryar Mohri · Afshin Rostamizadeh

Mon Dec 03 10:30 AM -- 10:40 AM (PST)

The notion of algorithmic stability has been used effectively in the past to derive tight generalization bounds. A key advantage of these bounds is that they are designed for specific learning algorithms, exploiting their particular properties. But, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed (i.i.d.). In many machine learning applications, however, this assumption does not hold: the observations received by the learning algorithm often have some inherent temporal dependence, as in system diagnosis or time series prediction problems. This paper studies the scenario where the observations are drawn from a stationary beta-mixing sequence, which implies a dependence between observations that weakens over time. It proves novel stability-based generalization bounds that hold in this more general setting and strictly generalize the bounds given in the i.i.d. case. We also illustrate their application to several general classes of learning algorithms, including Support Vector Regression and Kernel Ridge Regression.

Author Information

Mehryar Mohri (Google Research & Courant Institute of Mathematical Sciences)
Afshin Rostamizadeh (Google Research)