Automatic optimization of tensor programs is increasingly important as deep learning is deployed in diverse environments, and efficient optimization relies on a rich search space and an effective search strategy. Most existing efforts adopt a search space that domain experts cannot efficiently grow. This paper introduces MetaSchedule, a domain-specific probabilistic programming language abstraction for constructing rich search spaces of tensor programs. The abstraction lets domain experts analyze a program and propose stochastic choices in a modular way, composing them into program transformations. We also build an end-to-end learning-driven framework that finds an optimized program within a given search space. Experimental results show that MetaSchedule modularly covers the search spaces used by state-of-the-art tensor program optimization frameworks. Moreover, it lets domain experts conveniently grow the search space and enhance the system modularly, bringing a 48% speedup on end-to-end deep learning workloads.
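To make the idea of "stochastic choices composed into transformations" concrete, below is a minimal sketch written against Apache TVM's TensorIR scheduling API, on which MetaSchedule is built. The matmul workload and the particular tiling structure are illustrative assumptions rather than the paper's actual schedule rules, and exact TVMScript syntax can vary across TVM versions.

```python
# A minimal sketch of MetaSchedule-style stochastic scheduling in TVM.
# Assumptions: a toy 128x128x128 matmul and a simple 2-level tiling;
# these are illustrative, not the paper's schedule rules.
import tvm
from tvm.script import tir as T


@T.prim_func
def matmul(A: T.Buffer((128, 128), "float32"),
           B: T.Buffer((128, 128), "float32"),
           C: T.Buffer((128, 128), "float32")):
    # A plain matrix multiply expressed in TensorIR.
    for i, j, k in T.grid(128, 128, 128):
        with T.block("C"):
            vi, vj, vk = T.axis.remap("SSR", [i, j, k])
            with T.init():
                C[vi, vj] = T.float32(0)
            C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]


sch = tvm.tir.Schedule(matmul)
block = sch.get_block("C")
i, j, k = sch.get_loops(block)

# Stochastic choices: instead of hard-coding tile sizes, sample them.
# Each call records a random decision in the schedule's trace.
i_factors = sch.sample_perfect_tile(i, n=2)  # e.g. [8, 16]
j_factors = sch.sample_perfect_tile(j, n=2)

# Deterministic transformations composed with the sampled decisions.
i0, i1 = sch.split(i, factors=i_factors)
j0, j1 = sch.split(j, factors=j_factors)
sch.reorder(i0, j0, i1, j1, k)

# The trace records both transformations and sampled decisions; replaying
# it with different decisions yields different programs in the search space.
print(sch.trace)
```

Replaying such a trace with different sampled decisions enumerates concrete programs in the search space, which the learning-driven framework then explores to find a fast one.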
Author Information
Junru Shao (OctoML)
Xiyou Zhou (OctoML)
Siyuan Feng (Shanghai Jiao Tong University)
Bohan Hou (School of Computer Science, Carnegie Mellon University)
Ruihang Lai (Carnegie Mellon University)
Hongyi Jin (School of Computer Science, Carnegie Mellon University)
Wuwei Lin (OctoML)
Masahiro Masuda
Cody Hao Yu (Amazon Web Services)
Tianqi Chen (Carnegie Mellon University)