A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions
Mejbah Alam · Justin Gottschlich · Nesime Tatbul · Javier Turek · Tim Mattson · Abdullah Muzahid

Thu Dec 12th 05:00 -- 07:00 PM @ East Exhibition Hall B + C #120

The field of machine programming (MP), the automation of the development of software, is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In this paper, we apply MP to the automation of software performance regression testing. A performance regression is a software performance degradation caused by a code change. We present AutoPerf – a novel approach to automate regression testing that utilizes three core techniques: (i) zero-positive learning, (ii) autoencoders, and (iii) hardware telemetry. We demonstrate AutoPerf’s generality and efficacy against 3 types of performance regressions across 10 real performance bugs in 7 benchmark and open-source programs. On average, AutoPerf exhibits 4% profiling overhead and accurately diagnoses more performance bugs than prior state-of-the-art approaches. Thus far, AutoPerf has produced no false negatives.
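The core idea — train only on telemetry from the non-regressed version (zero-positive learning), then flag executions whose autoencoder reconstruction error is anomalously high — can be sketched as follows. This is an illustrative toy, not AutoPerf's implementation: the counter data is synthetic, and for brevity it uses the closed-form optimum of a linear autoencoder (truncated SVD) in place of a gradient-trained nonlinear one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hardware telemetry: each row is a vector of
# normalized performance counters for one execution of a function in the
# non-regressed program version (all values here are illustrative).
train = rng.normal(loc=0.5, scale=0.05, size=(500, 8))

# Simulated regressed runs: e.g. cache-miss-related counters shift upward.
regressed = rng.normal(loc=0.5, scale=0.05, size=(20, 8))
regressed[:, 2:4] += 0.4

# Zero-positive learning: fit on normal data only.  A linear autoencoder's
# optimal encoder/decoder are the top principal directions, so we take them
# directly from the SVD of the centered training matrix.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:3].T                       # 3-dimensional bottleneck

def recon_error(x):
    """Per-row mean-squared reconstruction error through the bottleneck."""
    centered = x - mean
    recon = centered @ V @ V.T
    return np.mean((centered - recon) ** 2, axis=1)

# Anomaly threshold taken from the training (normal) error distribution.
threshold = np.percentile(recon_error(train), 99)

flagged = recon_error(regressed) > threshold   # regressed runs stand out
false_alarm_rate = (recon_error(train) > threshold).mean()  # ~1% by design
```

The shifted counters lie largely outside the low-dimensional subspace learned from normal runs, so their reconstruction error far exceeds the threshold; the threshold choice (here the 99th percentile of training error) sets the false-alarm budget.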

Author Information

Mejbah Alam (Intel Labs)
Justin Gottschlich (Intel Labs)
Nesime Tatbul (Intel Labs and MIT)
Javier Turek (Intel Labs)
Tim Mattson (Intel)

Tim Mattson is a parallel programmer obsessed with every variety of science (Ph.D. Chemistry, UCSC, 1985). He is a senior principal engineer in Intel's Parallel Computing Lab. Tim has been with Intel since 1993 and has worked with brilliant people on great projects including: (1) the first TFLOP computer, (2) the OpenMP and OpenCL programming languages, (3) two different research processors (Intel's TFLOP chip and the 48-core SCC), (4) data management systems (polystore systems and array-based storage engines), and (5) the GraphBLAS API for expressing graph algorithms as sparse linear algebra.

Abdullah Muzahid (Texas A&M University)