We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." Existing uniform convergence and localization frameworks, the most widely used tools for this problem, often fail to provide parameter localization and optimal dependence on the sample size simultaneously. As a result, existing problem-dependent rates are often rather weak when the hypothesis class is "rich" and the worst-case bound on the loss is large. In this paper we propose a new framework based on a "uniform localized convergence" principle. We provide the first (moment-penalized) estimator that achieves the optimal variance-dependent rate for general "rich" classes; we also establish improved loss-dependent rates for standard empirical risk minimization.
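For intuition, here is a hedged sketch of what a moment-penalized estimator and a variance-dependent rate generically look like; it is not the paper's exact estimator or theorem. Assume i.i.d. samples $z_1,\dots,z_n$, a loss class $\{\ell_f : f \in \mathcal{F}\}$ bounded in $[0,B]$, and a hypothetical complexity parameter $\mathrm{comp}(\mathcal{F})$ (e.g., a log covering number); the constants $c_1, c_2$ are placeholders. A generic moment-penalized estimator selects

\[
\hat{f} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \left\{ \frac{1}{n}\sum_{i=1}^n \ell_f(z_i) \;+\; c_1 \sqrt{\frac{\mathrm{comp}(\mathcal{F})}{n} \cdot \frac{1}{n}\sum_{i=1}^n \ell_f(z_i)^2} \;+\; c_2\,\frac{B\,\mathrm{comp}(\mathcal{F})}{n} \right\},
\]

aiming at an excess-risk guarantee of order

\[
\mathbb{E}[\ell_{\hat f}] - \mathbb{E}[\ell_{f^*}] \;\lesssim\; \sqrt{\frac{\mathrm{Var}(\ell_{f^*})\,\mathrm{comp}(\mathcal{F})}{n}} \;+\; \frac{B\,\mathrm{comp}(\mathcal{F})}{n},
\]

where $f^*$ is the best hypothesis in $\mathcal{F}$. The point of a variance-dependent rate is that the leading term scales with $\mathrm{Var}(\ell_{f^*})$ rather than the worst-case bound $B$, which is what a naive uniform-convergence argument would yield.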
Author Information
Yunbei Xu (Columbia University)
Assaf Zeevi (Columbia University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Towards Problem-dependent Optimal Learning Rates
  Thu. Dec 10th, 05:00 -- 07:00 PM · Poster Session 5 #1399
More from the Same Authors
- 2021 Spotlight: A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
  Anand Kalvit · Assaf Zeevi
- 2022 Poster: Dynamic Learning in Large Matching Markets
  Anand Kalvit · Assaf Zeevi
- 2022 Poster: Online Allocation and Learning in the Presence of Strategic Agents
  Steven Yin · Shipra Agrawal · Assaf Zeevi
- 2021 Poster: A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
  Anand Kalvit · Assaf Zeevi
- 2020 Poster: From Finite to Countable-Armed Bandits
  Anand Kalvit · Assaf Zeevi
- 2014 Poster: Stochastic Multi-Armed-Bandit Problem with Non-stationary Rewards
  Omar Besbes · Yonatan Gur · Assaf Zeevi