Poster

EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization

Dong HUANG · Jianbo Dai · Han Weng · Puzhen Wu · Yuhao QING · Heming Cui · Zhijiang Guo · Jie Zhang

East Exhibit Hall A-C #4601
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Large language models (LLMs) have shown remarkable progress in code generation, but their generated code often suffers from inefficiency, resulting in longer execution times and higher memory consumption. To address this issue, we propose EffiLearner, a self-optimization framework that utilizes execution overhead profiles to improve the efficiency of LLM-generated code. EffiLearner first generates code using an LLM, then executes it locally to capture execution time and memory usage profiles. These profiles are fed back to the LLM, which then revises the code to reduce overhead. To evaluate the effectiveness of EffiLearner, we conduct extensive experiments on EffiBench and two commonly used code generation benchmarks with 16 open-source and 6 closed-source models. Our evaluation results demonstrate that through iterative self-optimization, EffiLearner significantly enhances the efficiency of LLM-generated code. For example, the execution time (ET) of StarCoder2-15B on EffiBench decreases from 0.93 s to 0.12 s, an 87.1% reduction compared with the initial code. The total memory usage (TMU) of StarCoder2-15B also decreases from 22.02 Mbs to 2.03 Mbs, a 90.8% reduction in total memory consumption during execution.
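The profile-and-revise loop described in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only, not the authors' implementation: the llm callable, the solve entry point, the prompt wording, and the iteration count are all assumptions.

# A minimal sketch of an EffiLearner-style self-optimization loop, assuming a
# generic llm(prompt) -> str completion function. Names and prompt wording are
# hypothetical; they stand in for whatever the real framework uses.
import time
import tracemalloc

def profile_overhead(code: str, test_input: str) -> dict:
    """Execute candidate code locally and record execution time and peak memory."""
    namespace: dict = {}
    tracemalloc.start()
    start = time.perf_counter()
    exec(code, namespace)            # run the generated solution's definitions
    namespace["solve"](test_input)   # hypothetical entry point of the solution
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"execution_time_s": elapsed, "peak_memory_mb": peak / 1e6}

REVISE_PROMPT = (
    "The following code is correct but inefficient.\n"
    "Overhead profile: {profile}\n"
    "Code:\n{code}\n"
    "Rewrite the code to reduce execution time and memory usage."
)

def effilearner(task: str, llm, test_input: str, iterations: int = 5) -> str:
    """Generate code, then iteratively revise it using measured overhead profiles."""
    code = llm(task)  # initial generation
    for _ in range(iterations):
        profile = profile_overhead(code, test_input)
        code = llm(REVISE_PROMPT.format(profile=profile, code=code))
    return code

The key design point the abstract describes is that each revision is grounded in a measured execution profile rather than in the model's own guess about where the code is slow.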
