Predicting the execution time of computer programs is an important but challenging problem in the computer systems community. Existing methods require experts to perform detailed analysis of program code in order to construct predictors or select important features. We recently developed a new system that automatically extracts a large number of features from program execution on sample inputs, from which prediction models can be constructed without expert knowledge. In this paper we study the construction of predictive models for this problem. We propose the SPORE (Sparse POlynomial REgression) methodology to build accurate prediction models of program performance using feature data collected from program execution on sample inputs. Our two SPORE algorithms learn relationships between responses (e.g., the execution time of a computer program) and features, selecting a few of the hundreds of retrieved features to construct an explicitly sparse, non-linear model of the response variable. The compact, explicitly polynomial form of the estimated model can reveal important insights into the computer program (e.g., the features and non-linear feature combinations that dominate execution time), enabling a better understanding of the program's behavior. Our evaluation on three widely used computer programs shows that SPORE methods give accurate predictions, with relative error below 7%, using a moderate number of training samples. In addition, we compare the SPORE algorithms to state-of-the-art sparse regression algorithms and show that the SPORE methods, motivated by real applications, outperform the others in terms of both interpretability and prediction accuracy.
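The abstract's core recipe — expand raw program features into polynomial terms, then use sparsity-inducing regression to keep only the few terms that matter — can be illustrated with a generic sketch. This is not the authors' SPORE implementation: the helper names (`poly_expand`, `lasso_cd`), the toy data, and the choice of a plain coordinate-descent Lasso are all illustrative assumptions. The sketch only shows how an l1 penalty over a polynomial feature expansion yields a compact, interpretable non-linear model.

```python
import numpy as np

def poly_expand(X):
    """Expand raw features into a degree-2 polynomial basis:
    the original columns plus all squared and pairwise-product terms."""
    n, d = X.shape
    cols = [X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    return np.hstack(cols)

def lasso_cd(X, y, lam=0.05, n_iter=200):
    """Plain coordinate-descent Lasso for (1/2n)||y - Xw||^2 + lam*||w||_1.
    The l1 penalty drives most coefficients to exactly zero,
    leaving a sparse polynomial model."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]   # residual with feature j removed
            rho = X[:, j] @ r
            # Soft-thresholding update for coordinate j.
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return w

# Toy "program": running time driven by one raw feature and one interaction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 3 raw input features
y = 2 * X[:, 0] + 3 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)

Z = poly_expand(X)                       # 9 candidate polynomial terms
w = lasso_cd(Z, y)                       # most coefficients shrink to exactly zero
```

In this setup the fitted `w` is sparse: the coefficients on the raw feature `x0` and the interaction `x0*x1` dominate, which is the sense in which the recovered polynomial form points at the terms driving the response.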
Author Information
Ling Huang (Intel)
Jinzhu Jia (UC Berkeley)
Bin Yu
Byung-Gon Chun (Intel)
Petros Maniatis (Intel)
Mayur Naik (Intel)
More from the Same Authors
- 2014 Poster: Large-Margin Convex Polytope Machine
  Alex Kantchelian · Michael C Tschantz · Ling Huang · Peter Bartlett · Anthony D Joseph · J. D. Tygar
- 2009 Poster: Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness
  Garvesh Raskutti · Martin J Wainwright · Bin Yu
- 2009 Spotlight: Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness
  Garvesh Raskutti · Martin J Wainwright · Bin Yu
- 2009 Poster: A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
  Sahand N Negahban · Pradeep Ravikumar · Martin J Wainwright · Bin Yu
- 2009 Oral: A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
  Sahand N Negahban · Pradeep Ravikumar · Martin J Wainwright · Bin Yu
- 2008 Poster: Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
  Pradeep Ravikumar · Vincent Vu · Bin Yu · Thomas Naselaris · Kendrick Kay · Jack Gallant
- 2008 Spotlight: Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
  Pradeep Ravikumar · Vincent Vu · Bin Yu · Thomas Naselaris · Kendrick Kay · Jack Gallant
- 2008 Poster: Spectral Clustering with Perturbed Data
  Ling Huang · Donghui Yan · Michael Jordan · Nina Taft
- 2008 Poster: Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of $\ell_1$-regularized MLE
  Pradeep Ravikumar · Garvesh Raskutti · Martin J Wainwright · Bin Yu
- 2008 Spotlight: Spectral Clustering with Perturbed Data
  Ling Huang · Donghui Yan · Michael Jordan · Nina Taft
- 2006 Poster: Distributed PCA and Network Anomaly Detection
  Ling Huang · XuanLong Nguyen · Minos Garofalakis · Michael Jordan · Anthony D Joseph · Nina Taft