Poster
in
Workshop: Adaptive Experimental Design and Active Learning in the Real World
Improved Bounds for Agnostic Active Learning of Single Index Models
Aarshvi Gajjar · Xingyu Xu · Christopher Musco · Chinmay Hegde
Abstract:
We study methods for actively learning single index models of the form $F(\mathbf{x}) = f(\langle \mathbf{a}, \mathbf{x} \rangle)$, where $f: \mathbb{R} \to \mathbb{R}$ and $\mathbf{a} \in \mathbb{R}^d$. Such functions are important in scientific computing, where they are used to construct surrogate models for partial differential equations (PDEs) and to approximate high-dimensional Quantities of Interest (QoIs). In these applications, collecting each function sample requires solving a partial differential equation, so sample-efficient active learning methods translate directly to reduced computational cost. Our work provides two main results. First, when $f$ is known and Lipschitz, we show that $\tilde{O}(d)$ samples collected via \emph{statistical leverage score sampling} are sufficient to find an optimal single index model for a given target function, even in the challenging and practically important agnostic (adversarial noise) setting. This result is optimal up to logarithmic factors and improves quadratically on a recent bound of Gajjar et al. (2023). Second, we show that $\tilde{O}(d)$ samples suffice in the more difficult non-parametric setting when $f$ is \emph{unknown}, which is also the best result known in this general setting.
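The sampling primitive named in the first result, statistical leverage score sampling, has a standard form: compute the leverage score of each row of a design matrix and draw rows with probability proportional to those scores, reweighting the kept rows so that subsampled least-squares estimates remain unbiased. The sketch below is an illustrative implementation of that generic primitive only, not the authors' algorithm; the function names and the QR-based computation are our own choices.

```python
import numpy as np

def leverage_scores(A):
    """Leverage scores of the rows of A: l_i = a_i^T (A^T A)^+ a_i.

    Computed stably via a thin QR factorization: if A = QR with Q
    orthonormal, then l_i is the squared norm of the i-th row of Q.
    """
    Q, _ = np.linalg.qr(A)  # reduced QR: Q has shape (n, d)
    return np.sum(Q ** 2, axis=1)

def leverage_sample(A, y, m, seed=None):
    """Draw m rows of (A, y) with probability proportional to leverage.

    Rows are rescaled by 1/sqrt(m * p_i) so that the subsampled
    least-squares objective matches the full one in expectation.
    """
    rng = np.random.default_rng(seed)
    p = leverage_scores(A)
    p = p / p.sum()  # sampling distribution over rows
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])  # importance-sampling reweighting
    return A[idx] * w[:, None], y[idx] * w
```

For a full-rank $n \times d$ matrix the scores sum to $d$, which is why leverage-based schemes can get away with sample counts scaling in $d$ rather than $n$.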