A Study on Encodings for Neural Architecture Search
Colin White, Willie Neiswanger, Sam Nolen, Yash Savani
Spotlight presentation: Orals & Spotlights Track 33: Health/AutoML/(Soft|Hard)ware
on 2020-12-10T19:00:00-08:00 - 2020-12-10T19:10:00-08:00
Abstract: Neural architecture search (NAS) has been extensively studied in the past few years. A popular approach is to represent each neural architecture in the search space as a directed acyclic graph (DAG), and then search over all DAGs by encoding the adjacency matrix and list of operations as a set of hyperparameters. Recent work has demonstrated that even small changes to the way each architecture is encoded can have a significant effect on the performance of NAS algorithms. In this work, we present the first formal study on the effect of architecture encodings for NAS, including a theoretical grounding and an empirical study. First we formally define architecture encodings and give a theoretical characterization on the scalability of the encodings we study. Then we identify the main encoding-dependent subroutines which NAS algorithms employ, running experiments to show which encodings work best with each subroutine for many popular algorithms. The experiments act as an ablation study for prior work, disentangling the algorithmic and encoding-based contributions, as well as a guideline for future work. Our results demonstrate that NAS encodings are an important design decision which can have a significant impact on overall performance. Our code is available at https://github.com/naszilla/naszilla.
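To make the encoding idea concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for their code) of one common DAG-based encoding: flattening the upper triangle of the cell's adjacency matrix and appending a one-hot vector per node operation. The operation set, function name, and example cell are hypothetical illustrations.

```python
import numpy as np

OPS = ["conv3x3", "conv1x1", "maxpool3x3"]  # example operation set (assumed)

def encode_adjacency_one_hot(adj, ops):
    """Encode a cell architecture as a flat binary vector.

    adj: (n, n) upper-triangular 0/1 adjacency matrix of the cell DAG
    ops: list of operation labels, one per interior node
    """
    n = adj.shape[0]
    # Edges: flatten the strictly upper-triangular part of the adjacency matrix.
    edge_bits = adj[np.triu_indices(n, k=1)]
    # Operations: one-hot encode each node's operation label.
    op_bits = np.zeros((len(ops), len(OPS)))
    for i, op in enumerate(ops):
        op_bits[i, OPS.index(op)] = 1.0
    return np.concatenate([edge_bits, op_bits.ravel()])

# Example: a 4-node cell (input node, two operation nodes, output node).
adj = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
encoding = encode_adjacency_one_hot(adj, ["conv3x3", "maxpool3x3"])
print(encoding.shape)  # (6 edge bits + 2 nodes * 3 ops,) = (12,)
```

A NAS algorithm can then treat each position of this vector as a hyperparameter to search over; the paper's point is that the choice of such an encoding, not just the search algorithm, can materially change performance.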