Evaluating Program Semantics Reasoning with Type Inference in System $F$
Yifeng He · Luning Yang · Christopher Gonzalo · Hao Chen
Abstract
Large Language Models (LLMs) are increasingly integrated into the software engineering ecosystem. Their test-time compute reasoning capabilities hold significant promise for understanding program logic and semantics beyond mere token recognition. However, current benchmarks for evaluating reasoning LLMs on code lack a formal, program-centric deductive framework to ensure the soundness of evaluation, and are therefore unable to assess whether models genuinely reason about program semantics or merely exploit superficial associations between natural language and code tokens. To bridge this gap, we introduce TF-Bench, a benchmark designed to evaluate LLM reasoning via type inference in System $F$, a task we refer to as *program semantics reasoning*. By employing verified transformations to remove semantically irrelevant natural language, we construct TF-Bench_pure, a purely semantics-driven variant of TF-Bench. Our analysis reveals substantial limitations in state-of-the-art LLMs, with the best-performing model (Claude-3.7-sonnet) achieving only $55.85\%$ accuracy on TF-Bench_pure. We further propose two novel metrics to assess the robustness and effectiveness of extended reasoning, underscoring critical limitations in current LLM capabilities and highlighting essential directions for future research.
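To make the task concrete, the sketch below gives a hypothetical illustration (not an actual TF-Bench item) of type inference as program semantics reasoning, written in Haskell: given only a definition, the model must produce its most general polymorphic type, and in the "pure" variant the identifiers carry no natural-language hints, so the type must be recovered from program structure alone.

```haskell
-- Hypothetical illustration of the task; names and formatting are
-- illustrative assumptions, not drawn from the benchmark itself.

-- Natural-language-flavored version: the descriptive name hints at intent.
mapBoth f (x, y) = (f x, f y)
-- Expected answer: mapBoth :: (a -> b) -> (a, a) -> (b, b)

-- "Pure" version: identifiers are semantically uninformative, so the
-- type must be inferred from the structure of the program alone.
v0 v1 (v2, v3) = (v1 v2, v1 v3)
-- Expected answer: v0 :: (a -> b) -> (a, a) -> (b, b)

main :: IO ()
main = print (mapBoth (+ 1) (1 :: Int, 2))
```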