NIPS 2015


In both stochastic and online learning we have a good theoretical
understanding of the most difficult learning tasks through worst-case
or minimax analysis, and we have algorithms to match. Yet there are
commonly occurring cases that are much easier than the worst case,
in which these methods are overly conservative, leaving a large gap
between the performance predicted by theory and that observed in
practice. Recent work has refined our theoretical understanding of the
wide spectrum of easy cases, leading to the development of algorithms
that are robust to the worst case, but can also automatically adapt to
easier data and achieve faster rates whenever possible.

Examples of easier cases include (Tsybakov) margin conditions, low
noise or variance, probabilistic Lipschitzness and empirical curvature
of the loss (strong convexity, exp-concavity, mixability), as well as
low-complexity decision boundaries and comparators, quantile bounds,
and cases with few switches among few leaders. Adapting to such easy
data often involves data-dependent bias-variance trade-offs through
hyper-parameter learning, adaptive regularisation or exploration, or
hypothesis testing to distinguish between easy and hard cases.
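As one concrete (illustrative, not workshop-specific) example of such data-dependent adaptivity, the sketch below shows AdaGrad-style per-coordinate step sizes for online gradient descent: on easy data with small or sparse gradients the accumulated squared gradients stay small, so the effective learning rate stays large and regret shrinks, while the worst-case guarantee is retained. All names and the data stream here are hypothetical.

import numpy as np

def adagrad_step(w, grad, accum, eta=1.0, eps=1e-8):
    """One online update; `accum` holds the running sum of squared gradients per coordinate."""
    accum += grad ** 2
    w -= eta * grad / (np.sqrt(accum) + eps)
    return w, accum

# Hypothetical usage on a low-noise ("easy") stream with squared loss.
rng = np.random.default_rng(0)
d = 5
w = np.zeros(d)
accum = np.zeros(d)
for t in range(1000):
    x = rng.normal(size=d)
    y = x @ np.ones(d) + 0.01 * rng.normal()   # near-noiseless targets
    grad = (w @ x - y) * x                      # gradient of 0.5*(w.x - y)^2
    w, accum = adagrad_step(w, grad, accum)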

The last two years have seen many exciting new developments in the
form of new desirable adaptivity targets, new algorithms and new
analysis techniques. In this workshop we aim to bring together
researchers and practitioners interested in adaptation to easy
data. The key questions we will discuss are: What are the
data-dependent trade-offs encountered (bias-variance or other)?
Can we identify commonalities across different problem domains in
strategies that are being used to deal with these trade-offs? And what
is the price for adaptivity (if any)? Both theoretical and empirical
insights are welcome.
