Poster
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
Ilias Diakonikolas · Daniel Kane · Pasin Manurangsi
East Exhibition Hall B, C #233
Keywords: [ Learning Theory ] [ Theory ] [ Hardness of Learning and Approximations ]
Abstract:
We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_\gamma + \epsilon$, where $\mathrm{OPT}_\gamma$ is the optimal $\gamma$-margin error rate and $\alpha \ge 1$ is the approximation ratio.
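For concreteness, here is the standard formalization of these quantities as used in this line of work (the abstract leaves it implicit): for a distribution $D$ over labeled examples $(x, y)$ with $\|x\|_2 \le 1$ and $y \in \{\pm 1\}$,

$$\mathrm{err}_\gamma^D(w) = \Pr_{(x,y) \sim D}\big[\, y \, \langle w, x \rangle < \gamma \,\big], \qquad \mathrm{OPT}_\gamma = \min_{\|w\|_2 = 1} \mathrm{err}_\gamma^D(w),$$

and a proper learner must output a halfspace $h_w(x) = \mathrm{sign}(\langle w, x \rangle)$ whose zero-margin misclassification error is at most $\alpha \cdot \mathrm{OPT}_\gamma + \epsilon$.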
We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \ge 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting in which $\alpha$ is any constant greater than one, we provide an essentially tight complexity characterization.
On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2 \gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that {\em any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2 - o(1)}}$, assuming the Exponential Time Hypothesis.
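To make the near-tightness explicit, the two runtime bounds above can be placed side by side:

$$\underbrace{\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}}_{\text{upper bound (algorithm)}} \qquad \text{vs.} \qquad \underbrace{\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2 - o(1)}}}_{\text{lower bound, assuming ETH}}$$

The exponents $\tilde{O}(1/\gamma^2)$ and $(1/\gamma)^{2 - o(1)}$ agree up to factors subpolynomial in $1/\gamma$, which is the sense in which the characterization is essentially tight for constant $\alpha > 1$.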