

Poster in Workshop: Adaptive Experimental Design and Active Learning in the Real World

Improved Bounds for Agnostic Active Learning of Single Index Models

Aarshvi Gajjar · Xingyu Xu · Christopher Musco · Chinmay Hegde


Abstract: We study methods for actively learning single index models of the form $F({\boldsymbol x}) = f(\langle {\boldsymbol w}, {\boldsymbol x}\rangle)$, where $f:\mathbb{R} \to \mathbb{R}$ and ${\boldsymbol w} \in \mathbb{R}^d$. Such functions are important in scientific computing, where they are used to construct surrogate models for partial differential equations (PDEs) and to approximate high-dimensional Quantities of Interest (QoIs). In these applications, collecting each function sample requires solving a partial differential equation, so sample-efficient active learning methods translate directly to reduced computational cost. Our work provides two main results. First, when $f$ is known and Lipschitz, we show that $\tilde{O}(d)$ samples collected via \emph{statistical leverage score sampling} are sufficient to find an optimal single index model for a given target function, even in the challenging and practically important agnostic (adversarial noise) setting. This result is optimal up to logarithmic factors and improves quadratically on a recent $\tilde{O}(d^{2})$ bound of Gajjar et al. (2023). Second, we show that $\tilde{O}(d^{3/2})$ samples suffice in the more difficult non-parametric setting when $f$ is \emph{unknown}, which is also the best result known in this general setting.
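For readers unfamiliar with the sampling primitive named in the abstract, the following is a minimal sketch of statistical leverage score sampling using NumPy. It is an illustrative assumption about the general technique, not the authors' exact procedure: leverage scores of a design matrix $X$ are the squared row norms of an orthonormal basis for $X$'s column span, and rows of $(X, y)$ are sampled with probability proportional to those scores, with the standard importance-weight rescaling.

```python
import numpy as np

def leverage_scores(X):
    """Leverage score of row i: squared norm of row i of an
    orthonormal basis Q for the column span of X. The scores
    sum to rank(X)."""
    Q, _ = np.linalg.qr(X)           # thin QR: Q has orthonormal columns
    return np.sum(Q ** 2, axis=1)

def leverage_sample(X, y, m, rng):
    """Draw m rows of (X, y) i.i.d. with probability proportional
    to leverage score, rescaling each sampled row by 1/sqrt(m * p_i)
    so the subsampled least-squares objective is unbiased."""
    scores = leverage_scores(X)
    p = scores / scores.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])    # importance weights
    return X[idx] * w[:, None], y[idx] * w

# Toy usage: subsample a random 1000 x 5 regression problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = X @ rng.standard_normal(5)
X_sub, y_sub = leverage_sample(X, y, m=50, rng=rng)
```

In the paper's setting, the labels $y$ would come from expensive PDE solves, so only the $m \approx \tilde{O}(d)$ sampled rows ever need to be evaluated.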
