Poster
Adapting Neural Architectures Between Domains
Yanxi Li · Zhaohui Yang · Yunhe Wang · Chang Xu

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1782

Neural architecture search (NAS) has demonstrated impressive performance in automatically designing high-performance neural networks. Deep neural networks are ultimately meant to analyze large volumes of data (e.g. ImageNet), yet the architecture search itself is often executed on a smaller proxy dataset (e.g. CIFAR-10) so that it finishes in a feasible time. However, it is hard to guarantee that an architecture that is optimal on the proxy task will maintain its advantages on the larger, more challenging dataset. This paper aims to improve the generalization of neural architectures via domain adaptation. We analyze the generalization bound of the derived architecture and show that it is closely related to the validation errors on both domains and the distance between their data distributions. These theoretical analyses lead to AdaptNAS, a novel and principled approach for adapting neural architectures between domains in NAS. Our experiments show that a small portion of ImageNet is sufficient for AdaptNAS to extend the success of its architectures to the full ImageNet and outperform state-of-the-art comparison algorithms.
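The abstract describes an architecture-search objective that couples the validation errors on the proxy (source) and target domains with a measure of the distance between their data distributions. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: the toy supernet, the linear-kernel MMD distance, and the `lambda_dist` weight are all assumptions made for illustration.

```python
# Illustrative sketch only: architecture parameters are updated with a loss
# that combines source-domain and target-subset validation errors plus a
# penalty on the distance between the two domains' feature distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mmd_distance(x, y):
    """Linear-kernel MMD between two batches of features (assumed distance measure)."""
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


class TinySupernet(nn.Module):
    """Toy mixed-operation cell: a softmax over architecture weights `alpha`
    blends two candidate operations (hypothetical stand-in for a real supernet)."""

    def __init__(self, dim=16, num_classes=10):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(2))  # architecture parameters
        self.op1 = nn.Linear(dim, dim)
        self.op2 = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, num_classes)

    def features(self, x):
        w = F.softmax(self.alpha, dim=0)
        return w[0] * self.op1(x) + w[1] * self.op2(x)

    def forward(self, x):
        return self.head(self.features(x))


def architecture_step(model, opt_alpha, src_batch, tgt_batch, lambda_dist=0.1):
    """One architecture update mixing source/target validation losses with a
    feature-distribution distance term."""
    (xs, ys), (xt, yt) = src_batch, tgt_batch
    fs, ft = model.features(xs), model.features(xt)
    loss = (
        F.cross_entropy(model.head(fs), ys)        # source (proxy) validation error
        + F.cross_entropy(model.head(ft), yt)      # target-subset validation error
        + lambda_dist * mmd_distance(fs, ft)       # domain distance penalty
    )
    opt_alpha.zero_grad()
    loss.backward()
    opt_alpha.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinySupernet()
    opt_alpha = torch.optim.Adam([model.alpha], lr=3e-4)
    src = (torch.randn(32, 16), torch.randint(0, 10, (32,)))  # proxy domain (CIFAR-10-like)
    tgt = (torch.randn(32, 16), torch.randint(0, 10, (32,)))  # small target subset (ImageNet-like)
    print(architecture_step(model, opt_alpha, src, tgt))
```

In this sketch only the architecture parameters `alpha` receive optimizer updates; in a full bilevel search the network weights would be trained in a separate inner loop.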

Author Information

Yanxi Li (University of Sydney)
Zhaohui Yang (Peking University)
Yunhe Wang (Huawei Noah's Ark Lab)
Chang Xu (University of Sydney)
