Uncertainty Sampling is Preconditioned Stochastic Gradient Descent on Zero-One Loss
Stephen Mussmann · Percy Liang

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #132

Uncertainty sampling, a popular active learning algorithm, is used to reduce the amount of data required to learn a classifier, but it has been observed in practice to converge to different parameters depending on the initialization and sometimes to even better parameters than standard training on all the data. In this work, we give a theoretical explanation of this phenomenon, showing that uncertainty sampling on a convex (e.g., logistic) loss can be interpreted as performing a preconditioned stochastic gradient step on the population zero-one loss. Experiments on synthetic and real datasets support this connection.
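The algorithm discussed in the abstract can be sketched as follows: at each round, query the unlabeled point the current model is most uncertain about (predicted probability closest to 0.5 for a binary logistic model) and take a stochastic gradient step on the convex surrogate loss at that point. This is a minimal illustrative sketch, not the authors' implementation; the function name, learning rate, and data-generation details are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uncertainty_sampling(X, y, n_rounds=50, lr=0.5, seed=0):
    """Active learning by uncertainty sampling with a logistic model.

    Each round queries the not-yet-queried point whose predicted
    probability is closest to 0.5 (maximum uncertainty) and performs
    one SGD step on the logistic loss at that single example.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)  # random initialization
    queried = set()
    for _ in range(n_rounds):
        p = sigmoid(X @ w)
        # Uncertainty score: distance of p from 0.5; exclude queried points.
        scores = np.abs(p - 0.5)
        if queried:
            scores[list(queried)] = np.inf
        i = int(np.argmin(scores))
        queried.add(i)
        # Gradient of the logistic loss at the queried example.
        grad = (sigmoid(X[i] @ w) - y[i]) * X[i]
        w -= lr * grad
    return w
```

On linearly separable synthetic data, this procedure typically recovers a good separating direction after a few dozen queries, consistent with the paper's view of each query as a (preconditioned) stochastic gradient step.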

Author Information

Stephen Mussmann (Stanford University)
Percy Liang (Stanford University)