Gaussian process bandit optimization has emerged as a powerful tool for optimizing noisy black-box functions. One example in machine learning is hyper-parameter optimization, where each evaluation of the target function may require training a model, which can take days or even weeks of computation. Most methods for this so-called “Bayesian optimization” only allow sequential exploration of the parameter space. However, it is often desirable to propose batches, or sets of parameter values, to explore simultaneously, especially when large parallel processing facilities are at our disposal. Batch methods require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. In this paper, we propose a new approach to parallelizing Bayesian optimization that models the diversity of a batch via determinantal point processes (DPPs) whose kernels are learned automatically. This allows us to generalize a previous result as well as prove better regret bounds based on DPP sampling. Our experiments on a variety of synthetic and real-world robotics and hyper-parameter optimization tasks indicate that our DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
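To make the general idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation): fit a GP surrogate, use its posterior covariance over candidate points as a DPP kernel, restrict attention to a promising region, and pick a diverse batch. The greedy log-determinant selection here stands in for exact DPP sampling, and the UCB filter, toy objective, and all function names are assumptions made for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def greedy_dpp_batch(K, batch_size):
    """Greedily pick a diverse batch by maximizing the DPP log-determinant.

    K is a PSD kernel matrix over candidate points (here, the GP posterior
    covariance). This is the standard greedy MAP approximation, used in
    place of exact DPP sampling.
    """
    n = K.shape[0]
    selected = []
    for _ in range(batch_size):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # Jitter keeps the submatrix numerically positive definite.
            sub = K[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx))
            sign, logdet = np.linalg.slogdet(sub)
            if sign > 0 and logdet > best_gain:
                best_i, best_gain = i, logdet
        if best_i is None:
            break
        selected.append(best_i)
    return selected

# Toy 1-D objective to maximize (illustrative only).
def f(x):
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(5, 1))
y_train = f(X_train).ravel() + 0.05 * rng.standard_normal(5)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(X_train, y_train)

# Candidate grid; the posterior covariance defines the DPP kernel.
X_cand = np.linspace(-2, 2, 200).reshape(-1, 1)
mu, cov = gp.predict(X_cand, return_cov=True)

# Restrict candidates to a promising ("relevant") region via a UCB filter,
# then select a diverse batch inside it.
ucb = mu + 2.0 * np.sqrt(np.clip(np.diag(cov), 0, None))
relevant = np.where(ucb >= np.quantile(ucb, 0.8))[0]
batch = greedy_dpp_batch(cov[np.ix_(relevant, relevant)], batch_size=4)
batch_idx = [relevant[i] for i in batch]
print("Proposed batch:", X_cand[batch_idx].ravel())
```

Because the posterior covariance shrinks near already-selected points, the determinant objective naturally trades off informativeness against redundancy within the batch, which is what makes DPP kernels a natural fit for batch selection.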
Author Information
Tarun Kathuria (Microsoft Research)
Amit Deshpande (Microsoft Research)
Pushmeet Kohli (Microsoft Research)
More from the Same Authors
- 2023 Poster: Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes
  Abhinav Kumar · Amit Deshpande · Amit Sharma
- 2021 Poster: Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks
  Sandesh Kamath · Amit Deshpande · Subrahmanyam Kambhampati Venkata · Vineeth N Balasubramanian
- 2020 Contributed Talk 1: The Importance of Modeling Data Missingness in Algorithmic Fairness
  Naman Goel · Amit Deshpande
- 2017: Pushmeet Kohli
  Pushmeet Kohli
- 2017 Poster: Learning Disentangled Representations with Semi-Supervised Deep Generative Models
  Siddharth Narayanaswamy · Brooks Paige · Jan-Willem van de Meent · Alban Desmaison · Noah Goodman · Pushmeet Kohli · Frank Wood · Philip Torr
- 2016 Poster: PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
  Mikhail Figurnov · Aizhan Ibraimova · Dmitry Vetrov · Pushmeet Kohli
- 2016 Poster: Adaptive Neural Compilation
  Rudy Bunel · Alban Desmaison · Pawan K Mudigonda · Pushmeet Kohli · Philip Torr
- 2015 Poster: Efficient Non-greedy Optimization of Decision Trees
  Mohammad Norouzi · Maxwell Collins · Matthew A Johnson · David Fleet · Pushmeet Kohli
- 2015 Poster: Deep Convolutional Inverse Graphics Network
  Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum
- 2015 Spotlight: Deep Convolutional Inverse Graphics Network
  Tejas Kulkarni · William Whitney · Pushmeet Kohli · Josh Tenenbaum
- 2014 Poster: Just-In-Time Learning for Fast and Flexible Inference
  S. M. Ali Eslami · Danny Tarlow · Pushmeet Kohli · John Winn
- 2013 Poster: Decision Jungles: Compact and Rich Models for Classification
  Jamie Shotton · Toby Sharp · Pushmeet Kohli · Sebastian Nowozin · John Winn · Antonio Criminisi
- 2012 Poster: Multiple Choice Learning: Learning to Produce Multiple Structured Outputs
  Abner Guzmán-Rivera · Dhruv Batra · Pushmeet Kohli
- 2012 Poster: Context-Sensitive Decision Forests for Object Detection
  Peter Kontschieder · Samuel Rota Bulò · Antonio Criminisi · Pushmeet Kohli · Marcello Pelillo · Horst Bischof
- 2011 Poster: Higher-Order Correlation Clustering for Image Segmentation
  Sungwoong Kim · Sebastian Nowozin · Pushmeet Kohli · Chang D. Yoo
- 2009 Poster: Local Rules for Global MAP: When Do They Work?
  Kyomin Jung · Pushmeet Kohli · Devavrat Shah