ASAP: an Agentic Solution to Auto-optimize Performance of Large-Scale LLM Training
Abstract
Optimizing large language model (LLM) training on distributed domain-specific accelerator systems is challenging due to its complex optimization space. Existing approaches rely on time-consuming manual tuning or resource-intensive black-box searches, which struggle to keep pace with the rapidly evolving LLM domain, leading to slow development and underutilized resources. To address this, we introduce ASAP, an Agentic Solution to Auto-optimize Performance of Large-Scale LLM Training. ASAP is a multi-agent system comprising Coordinator, Analyzer, and Proposal agents; it integrates LLM reasoning with insights from performance profiling tools, roofline analysis, and a knowledge base of best practices and successful past optimizations. The proposed design automates the diagnosis of performance bottlenecks and generates optimized sharding configurations together with the reasoning behind them, effectively improving the efficiency of distributed LLM training. In our experiments, the sharding configurations proposed by the agent alone reduced compute time by up to 28% and increased throughput by up to 1.43x; combined with insights from our engineers, they increased total throughput by up to 258.27%. This approach promises to significantly reduce manual effort, shorten iteration cycles, and improve accelerator utilization, offering a scalable and explainable methodology for AI-assisted performance engineering in large-scale machine learning.