Bayesian Batch Active Learning as Sparse Subset Approximation
Robert Pinsler · Jonathan Gordon · Eric Nalisnick · José Miguel Hernández-Lobato

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #2

Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models. When the cost of acquiring labels is high, probabilistic active learning methods can be used to greedily select the most informative data points to be labeled. However, for many large-scale problems, standard greedy procedures become computationally infeasible and suffer from negligible model change. In this paper, we introduce a novel Bayesian batch active learning approach that mitigates these issues. Our approach is motivated by approximating the complete data posterior of the model parameters. While naive batch construction methods result in correlated queries, our algorithm produces diverse batches that enable efficient active learning at scale. We derive interpretable closed-form solutions akin to existing active learning procedures for linear models, and generalize to arbitrary models using random projections. We demonstrate the benefits of our approach on several large-scale regression and classification tasks.
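The core idea described above — approximating the full-pool posterior with a sparse weighted subset, selected by Frank-Wolfe over randomly projected per-point vectors — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature map, pool sizes, and all variable names (`X_pool`, `n_proj`, `batch_size`, etc.) are assumptions chosen for the example, and a random nonlinearity stands in for each point's projected expected log-likelihood vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- these are assumptions, not values from the paper.
n_pool, d, n_proj, batch_size = 200, 5, 50, 10

X_pool = rng.normal(size=(n_pool, d))  # hypothetical unlabeled pool

# Random projection standing in for each pool point's projected expected
# log-likelihood vector L_i (a stand-in feature map, not the paper's).
P = rng.normal(size=(d, n_proj)) / np.sqrt(n_proj)
L = np.tanh(X_pool @ P)                      # (n_pool, n_proj)

target = L.sum(axis=0)                       # full-pool objective vector
sigma_i = np.linalg.norm(L, axis=1)          # per-point norms
sigma = sigma_i.sum()

# Frank-Wolfe: build a sparse non-negative weight vector w so that
# L.T @ w approximates the full-pool vector; each iteration adds at
# most one new nonzero, so at most batch_size points get selected.
w = np.zeros(n_pool)
for _ in range(batch_size):
    residual = target - L.T @ w
    f = int(np.argmax(L @ residual / sigma_i))   # best-aligned point
    v = np.zeros(n_pool)
    v[f] = sigma / sigma_i[f]                    # vertex of the scaled simplex
    step_dir = L.T @ (v - w)
    denom = step_dir @ step_dir
    # Closed-form line search for the step size, clipped to [0, 1].
    gamma = np.clip((residual @ step_dir) / denom, 0.0, 1.0) if denom > 0 else 0.0
    w = (1.0 - gamma) * w + gamma * v

batch = np.flatnonzero(w)   # indices of pool points to query labels for
print("selected", len(batch), "points:", batch)
```

Because each Frank-Wolfe step mixes in a single vertex, the selected batch is at most `batch_size` points, and the alignment criterion discourages redundant (correlated) queries — the diversity property the abstract refers to.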

Author Information

Robert Pinsler (University of Cambridge)
Jonathan Gordon (University of Cambridge)
Eric Nalisnick (University of Cambridge & DeepMind)
José Miguel Hernández-Lobato (University of Cambridge)