Poster in Workshop: Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training

MC-DARTS: Model Size Constrained Differentiable Architecture Search

Kazuki Hemmi · Yuki Tanigaki · Masaki Onishi


Abstract:

Recently, extensive research has been conducted on automated machine learning (AutoML). Neural architecture search (NAS), a core component of AutoML, automatically optimizes neural network architectures for a given dataset and use case. One promising approach for finding high-accuracy models is gradient-based NAS, known as differentiable architecture search (DARTS). Previous DARTS-based studies have suggested that the optimal architecture size depends on the size of the dataset; when the optimal architecture is small, searching over large architectures is unnecessary. Architecture size must also be considered when deep learning is deployed on mobile devices and embedded systems, since memory on these platforms is limited. Therefore, in this paper, we propose a novel approach called model size constrained DARTS (MC-DARTS). The proposed approach adds constraints to DARTS so that the search considers both accuracy and model size. As a result, the proposed method efficiently finds network architectures with short training time and high accuracy under the given model size constraint.
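The abstract gives no implementation details, but the core idea, steering a DARTS-style search with a model size constraint, can be sketched. The following is a minimal, hypothetical PyTorch illustration, not the paper's actual method: the candidate operations, parameter counts, size budget, hinge-penalty form, and coefficients are all assumptions introduced for this example.

```python
import torch
import torch.nn.functional as F

# Hypothetical candidate operations on a single DARTS edge, each with an
# assumed parameter count (e.g. skip, 1x1 conv, 3x3 conv, 5x5 conv).
op_param_counts = torch.tensor([0.0, 1e3, 9e3, 36e3])

# DARTS-style architecture parameters: one continuous weight per candidate op.
alpha = torch.zeros(len(op_param_counts), requires_grad=True)

size_budget = 2e4      # assumed constraint on expected model size
penalty_weight = 1e-4  # assumed accuracy/size trade-off coefficient

optimizer = torch.optim.Adam([alpha], lr=3e-4)

def expected_size(alpha):
    # The softmax relaxation used in DARTS makes the expected parameter
    # count of the mixed operation differentiable in alpha.
    return (F.softmax(alpha, dim=-1) * op_param_counts).sum()

def arch_loss(val_loss, alpha):
    # Hinge penalty that activates only when the expected size exceeds
    # the budget, pushing the search toward architectures that satisfy
    # the model size constraint.
    overflow = torch.relu(expected_size(alpha) - size_budget)
    return val_loss + penalty_weight * overflow

# In a full DARTS loop, val_loss would come from the supernet evaluated on
# held-out validation data; a placeholder keeps this sketch self-contained.
for step in range(100):
    val_loss = (alpha ** 2).mean()  # placeholder for the real validation loss
    loss = arch_loss(val_loss, alpha)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Whether MC-DARTS enforces the constraint via a penalty like this, a hard projection, or some other mechanism is not stated in the abstract; the hinge form above is only one plausible way to keep the objective differentiable.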
