

Competition

AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale

Samuel Guo · Cong Xu · Nicholas Roberts · Misha Khodak · Junhong Shen · Evan Sparks · Ameet Talwalkar · Yuriy Nevmyvaka · Frederic Sala · Anderson Schneider

Virtual

Abstract:

As more areas beyond the traditional AI domains (e.g., computer vision and natural language processing) seek to take advantage of data-driven tools, the need for ML systems that can adapt to a wide range of downstream tasks efficiently and automatically continues to grow. The AutoML for the 2020s competition aims to catalyze research in this area and to establish a benchmark for the current state of automated machine learning. Unlike previous challenges, which focused on a single class of methods such as non-deep-learning AutoML, hyperparameter optimization, or meta-learning, this competition (1) evaluates automation on a diverse set of small- and large-scale tasks, and (2) allows the incorporation of the latest methods, such as neural architecture search and unsupervised pretraining. To this end, we curate 20 datasets that represent a broad spectrum of practical applications in scientific, technological, and industrial domains. Participants are given 10 development tasks selected from these datasets and must develop automated programs that perform well on as many problems as possible and generalize to the remaining private test tasks. To ensure efficiency, the evaluation is conducted under a fixed computational budget; to ensure robustness, winners are determined using the performance-profiles methodology. The organizers will provide computational resources to participants as needed and monetary prizes to the winners.
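The performance-profiles methodology referenced above follows the Dolan–Moré approach from the benchmarking literature: each submission's error on a task is compared to the best error achieved by any submission on that task, and the profile records the fraction of tasks on which a submission is within a given factor of the best. The abstract does not spell out the exact scoring rule the organizers use, so the sketch below is only a minimal illustration of the standard profile computation, using hypothetical error values and submission counts.

```python
import numpy as np

def performance_profiles(errors, taus):
    """Compute Dolan-More performance profiles.

    errors: array of shape (n_tasks, n_methods) holding a nonnegative
            error/loss for each method on each task (lower is better).
    taus:   1-D array of ratio thresholds (each tau >= 1).

    Returns rho of shape (len(taus), n_methods), where rho[i, s] is the
    fraction of tasks on which method s is within a factor taus[i] of
    the best method for that task.
    """
    errors = np.asarray(errors, dtype=float)
    best = errors.min(axis=1, keepdims=True)   # best error on each task
    ratios = errors / best                     # performance ratios r_{t,s}
    rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
    return rho

# Toy example: 3 hypothetical tasks, 2 hypothetical AutoML submissions.
errors = [[0.10, 0.12],
          [0.30, 0.25],
          [0.05, 0.20]]
taus = np.linspace(1.0, 4.0, 31)
rho = performance_profiles(errors, taus)
print(rho[0])    # fraction of tasks each submission wins outright (tau = 1)
print(rho[-1])   # fraction within a factor of 4 of the best on each task
```

A winner could then be chosen, for instance, by the area under its profile curve or its value at a fixed threshold; which aggregate the competition actually uses is not stated in this abstract.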

Schedule