

Poster in Affinity Workshop: WiML Workshop 1

Leveraging Resource Allocation and Approximation for Faster Hyperparameter Exploration

Lynn Liu


Abstract:

The test accuracy of machine learning models, such as deep neural networks, critically depends on hyperparameters related to model size (e.g., number of hidden layers) and the model optimizer (e.g., learning rate). There has been much research on designing parallel Hyperparameter Exploration (HPE) algorithms, which concurrently train models configured with different hyperparameters and search for the best configuration. However, existing HPE algorithms have two limitations. First, most algorithms are synchronous-parallel, so the exploration procedure can be significantly slowed down when candidate models have skewed training speeds. Second, existing algorithms are agnostic to resource availability and are not resilient to severe and dynamically changing resource constraints. In this study, we propose an HPE algorithm that overcomes these limitations. First, our design exploits nearly lossless approximate training techniques (i.e., mixed-precision quantization and low-rank gradient compression) to reduce resource requirements and fit within runtime resource constraints; approximation also speeds up slow-training models by reducing compute and network complexity. Second, our algorithm dynamically redistributes compute and network resource allocations based on the significance of candidate models, for maximal resource efficiency. Experiments show that our design achieves a 1.5-3.2x speedup over existing synchronous and asynchronous HPE algorithms on real HPE traces.
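To make the two ideas concrete, here is a minimal, illustrative Python sketch (not the authors' implementation) of an asynchronous HPE loop that (1) switches slow candidates to approximate training and (2) redistributes a fixed GPU budget toward the most significant candidates. All names and numbers (Candidate, significance, TOTAL_GPUS, the 1.6x approximation speedup) are hypothetical placeholders.

```python
# Hypothetical sketch of significance-driven resource reallocation plus
# approximation for stragglers; simulated training, no real models.
import random
from dataclasses import dataclass

TOTAL_GPUS = 8  # assumed cluster budget


@dataclass
class Candidate:
    name: str
    lr: float
    layers: int
    best_loss: float = float("inf")
    steps_per_sec: float = 1.0       # measured training speed
    use_approximation: bool = False  # stand-in for mixed precision / gradient compression
    gpus: int = 1


def significance(c: Candidate, best: float) -> float:
    """Heuristic: candidates whose loss is close to the current best matter more."""
    return 1.0 / (1.0 + (c.best_loss - best))


def reallocate(cands: list) -> None:
    """Redistribute the GPU budget proportionally to candidate significance."""
    best = min(c.best_loss for c in cands)
    weights = [significance(c, best) for c in cands]
    total = sum(weights)
    for c, w in zip(cands, weights):
        c.gpus = max(1, round(TOTAL_GPUS * w / total))


def train_one_round(c: Candidate) -> None:
    """Simulated asynchronous training round; approximation speeds up slow models."""
    speedup = 1.6 if c.use_approximation else 1.0  # assumed, nearly lossless
    progress = c.gpus * c.steps_per_sec * speedup
    c.best_loss = min(c.best_loss, random.uniform(0.1, 1.0) / (1.0 + progress))


candidates = [
    Candidate(f"cfg{i}", lr=10 ** -random.uniform(2, 4),
              layers=random.randint(2, 8),
              steps_per_sec=random.uniform(0.5, 2.0))
    for i in range(4)
]

for round_id in range(5):
    for c in candidates:
        # Approximate the stragglers so skewed training speeds do not stall the search.
        c.use_approximation = c.steps_per_sec < 1.0
        train_one_round(c)
    reallocate(candidates)

print("best config:", min(candidates, key=lambda c: c.best_loss).name)
```

The significance function and reallocation rule above are simple stand-ins; the poster's actual criteria for "significance" and its resource model are what determine the reported speedups.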
