Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations. As a result, a series of recent works has focused on transfer learning for quickly fine-tuning hyperparameters on a new dataset. Unfortunately, the community lacks a common large-scale benchmark for comparing HPO algorithms. Instead, the de facto practice consists of empirical protocols on arbitrary small-scale meta-datasets that vary inconsistently across publications, making reproducibility a challenge. To resolve this major bottleneck and enable a fair and fast comparison of black-box HPO methods on a level playing field, we propose HPO-B, a new large-scale benchmark in the form of a collection of meta-datasets. Our benchmark is assembled and preprocessed from the OpenML repository and consists of 176 search spaces (algorithms) evaluated sparsely on 196 datasets, with a total of 6.4 million hyperparameter evaluations. To ensure reproducibility on our benchmark, we detail explicit experimental protocols, splits, and evaluation measures for comparing methods in both the non-transfer and the transfer-learning HPO settings.
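To illustrate how such a tabular benchmark can be consumed, the sketch below runs a random-search baseline against precomputed hyperparameter evaluations: because every configuration's performance is stored in advance, "evaluating" a configuration is a table lookup rather than a model training run. The file layout, helper names (load_meta_dataset, random_search), and the example IDs are assumptions made for illustration; this is not the official HPO-B interface.

    import json
    import random

    # Minimal sketch, assuming a hypothetical JSON layout of the form
    # {search_space_id: {dataset_id: {"X": [...configs...], "y": [...accuracies...]}}}.
    # Not the official HPO-B API.

    def load_meta_dataset(path, search_space_id, dataset_id):
        """Load the precomputed evaluations of one search space on one dataset."""
        with open(path) as f:
            meta_data = json.load(f)
        entry = meta_data[search_space_id][dataset_id]
        return entry["X"], entry["y"]  # configurations and their observed accuracies

    def random_search(configs, accuracies, n_trials=100, seed=0):
        """Random-search baseline: sample configurations and track the incumbent."""
        rng = random.Random(seed)
        incumbent = float("-inf")
        trajectory = []
        for _ in range(n_trials):
            idx = rng.randrange(len(configs))          # pick a precomputed configuration
            incumbent = max(incumbent, accuracies[idx])  # cheap lookup instead of training
            trajectory.append(incumbent)               # best accuracy seen so far, per trial
        return trajectory

    # Hypothetical usage (file name and IDs are placeholders):
    # X, y = load_meta_dataset("hpob_meta_test.json", search_space_id="5971", dataset_id="3954")
    # print(random_search(X, y, n_trials=50))

The same lookup pattern supports the transfer-learning setting: a method can pre-train on the evaluations of the meta-train datasets and then be scored by its incumbent trajectory on held-out meta-test datasets.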
Author Information
Sebastian Pineda Arango (Albert-Ludwigs-Universität Freiburg)
Hadi Jomaa (University of Hildesheim)
Martin Wistuba (Amazon)
Josif Grabocka (Universität Freiburg)
More from the Same Authors
- 2021 : Transformers Can Do Bayesian-Inference By Meta-Learning on Prior-Data »
  Samuel Müller · Noah Hollmann · Sebastian Pineda Arango · Josif Grabocka · Frank Hutter
- 2021 : Transfer Learning for Bayesian HPO with End-to-End Landmark Meta-Features »
  Hadi Jomaa · Sebastian Pineda Arango · Lars Schmidt-Thieme · Josif Grabocka
- 2022 : Transfer NAS with Meta-learned Bayesian Surrogates »
  Gresa Shala · Thomas Elsken · Frank Hutter · Josif Grabocka
- 2022 : Gray-Box Gaussian Processes for Automated Reinforcement Learning »
  Gresa Shala · André Biedenkapp · Frank Hutter · Josif Grabocka
- 2022 : AutoRL-Bench 1.0 »
  Gresa Shala · Sebastian Pineda Arango · André Biedenkapp · Frank Hutter · Josif Grabocka
- 2022 : Bayesian Optimization with a Neural Network Meta-learned on Synthetic Data Only »
  Samuel Müller · Sebastian Pineda Arango · Matthias Feurer · Josif Grabocka · Frank Hutter
- 2022 Spotlight: Supervising the Multi-Fidelity Race of Hyperparameter Configurations »
  Martin Wistuba · Arlind Kadra · Josif Grabocka
- 2022 Spotlight: Lightning Talks 3B-1 »
  Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
- 2022 Poster: Supervising the Multi-Fidelity Race of Hyperparameter Configurations »
  Martin Wistuba · Arlind Kadra · Josif Grabocka
- 2022 Poster: Memory Efficient Continual Learning with Transformers »
  Beyza Ermis · Giovanni Zappella · Martin Wistuba · Aditya Rawal · Cedric Archambeau
- 2021 Poster: Well-tuned Simple Nets Excel on Tabular Datasets »
  Arlind Kadra · Marius Lindauer · Frank Hutter · Josif Grabocka