Poster
AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning
Krishnateja Killamsetty · Guttu Sai Abhishek · Aakriti Lnu · Ganesh Ramakrishnan · Alexandre Evfimievski · Lucian Popa · Rishabh Iyer

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #120

Deep neural networks have seen great success in recent years; however, training a deep model is often challenging because its performance heavily depends on the hyper-parameters used. In addition, finding the optimal hyper-parameter configuration, even with state-of-the-art (SOTA) hyper-parameter optimization (HPO) algorithms, can be time-consuming, requiring multiple training runs over the entire dataset for different candidate sets of hyper-parameters. Our central insight is that using an informative subset of the dataset for the model training runs involved in hyper-parameter optimization allows us to find the optimal hyper-parameter configuration significantly faster. In this work, we propose AUTOMATA, a gradient-based subset selection framework for hyper-parameter tuning. We empirically evaluate the effectiveness of AUTOMATA in hyper-parameter tuning through several experiments on real-world datasets in the text, vision, and tabular domains. Our experiments show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times, with speedups of 3x-30x, while achieving performance comparable to the hyper-parameters found using the entire dataset.
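To make the idea concrete, below is a minimal, hypothetical Python sketch of how a gradient-based subset selector can plug into an HPO loop. This is not the AUTOMATA implementation: the greedy gradient-matching selector, the toy linear model, and names such as `gradient_matching_subset` are illustrative assumptions; the actual framework re-selects subsets adaptively during training and uses more sophisticated gradient approximations.

```python
# Hypothetical sketch: pick a small subset whose summed per-example gradients
# approximate the full dataset's gradient sum, then run every HPO trial on
# that subset instead of the full data. Not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for a real training set.
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
X_val, y_val = X[:200], y[:200]
X_tr, y_tr = X[200:], y[200:]

def per_example_grads(X, y, w):
    # Per-example gradients of 0.5 * (x.w - y)^2 for a linear model:
    # (x.w - y) * x. A stand-in for last-layer gradient approximations.
    return (X @ w - y)[:, None] * X

def gradient_matching_subset(X, y, w, budget):
    # Greedy matching-pursuit selection (assumed, for illustration): at each
    # step add the example whose gradient best aligns with the residual
    # between the full gradient sum and the (scaled) subset gradient sum.
    g = per_example_grads(X, y, w)
    residual = g.sum(axis=0)
    chosen = []
    for _ in range(budget):
        scores = g @ residual
        scores[chosen] = -np.inf          # no repeats
        i = int(np.argmax(scores))
        chosen.append(i)
        residual -= (len(X) / budget) * g[i]  # scale subset sum toward full sum
    return np.array(chosen)

def train(X, y, lr, steps=200):
    # Plain full-batch gradient descent on the given (sub)set.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * per_example_grads(X, y, w).mean(axis=0)
    return w

# Select a ~5% subset once (AUTOMATA re-selects adaptively; kept static here
# for brevity) and run every HPO trial on it instead of the full data.
subset = gradient_matching_subset(X_tr, y_tr, np.zeros(d), budget=len(X_tr) // 20)
best_lr, best_err = None, np.inf
for lr in [1e-3, 1e-2, 1e-1, 3e-1]:       # toy grid search over one hyper-parameter
    w = train(X_tr[subset], y_tr[subset], lr)
    err = np.mean((X_val @ w - y_val) ** 2)
    if err < best_err:
        best_lr, best_err = lr, err
print(f"best lr found on subset: {best_lr} (val MSE {best_err:.4f})")
```

The saving comes from every trial in the HPO loop training on a small fraction of the data, while the selector's cost is paid once (or periodically) and amortized across all trials.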

Author Information

Krishnateja Killamsetty (University of Texas, Dallas)
Guttu Sai Abhishek (Indian Institute of Technology, Bombay)

Hi, this is Abhishek. I graduated in Computer Science and Engineering from IIT Bombay, India. Have a good day!

Aakriti Lnu (Indian Institute of Technology Bombay)

Final year Computer Science undergrad at IIT Bombay

Ganesh Ramakrishnan (Indian Institute of Technology Bombay)
Alexandre Evfimievski (International Business Machines)
Lucian Popa (International Business Machines)
Rishabh Iyer (University of Texas, Dallas)

Bio: Prof. Rishabh Iyer is currently an Assistant Professor at the University of Texas, Dallas, where he leads the CARAML Lab. He is also a Visiting Assistant Professor at the Indian Institute of Technology, Bombay. He completed his Ph.D. at the University of Washington, Seattle, in 2015. He is excited about making ML more efficient (in both computation and labeling), robust, and fair. He received the best paper award at Neural Information Processing Systems (NeurIPS/NIPS) in 2013, the best paper award at the International Conference on Machine Learning (ICML) in 2013, and an Honorable Mention at CODS-COMAD in 2021. He has also won a Microsoft Research Ph.D. Fellowship, a Facebook Ph.D. Fellowship, and the Yang Award for Outstanding Graduate Student from the University of Washington.
