

Tutorial

Machine Learning for Theorem Proving

Zhangir Azerbayev · Emily First · Albert Q. Jiang · Kaiyu Yang · Anima Anandkumar · Noah Goodman · Alex Sanchez-Stern · Dawn Song · Sean Welleck

Hall B2 (level 1)
[ Project Page ]
Mon 11 Dec 11:45 a.m. PST — 2:15 p.m. PST

Abstract:

Machine learning, especially large language models (LLMs), has shown promise in proving formal theorems using proof assistants such as Coq, Isabelle, and Lean. Theorem proving is an important challenge for machine learning: formal proofs are computer programs whose correctness can be verified. Theorem proving is therefore a form of code generation with rigorous evaluation and no room for the model to hallucinate, opening up a new avenue for addressing LLMs’ factuality flaws. Despite its potential, learning-based theorem proving has significant entry barriers, primarily due to the steep learning curve of proof assistants. This tutorial aims to lower these barriers and make theorem proving accessible to researchers with a general machine learning background. To that end, our presentation will contextualize theorem proving from a machine learning perspective and demonstrate how to develop LLMs for theorem proving, using newly available open-source tools that provide interfaces to proof assistants without requiring in-depth knowledge of their internals. Furthermore, we will cover advanced topics and open problems in learning-based theorem proving, including its synergies with natural language processing and software verification. Throughout the presentation, we will highlight several conceptual themes that recur in theorem proving and are also critical for machine learning, such as mathematical reasoning, code generation, and hallucination prevention. The panel will complement the presentation with a broader discussion of related topics, such as trustworthy machine learning, LLMs for code, reasoning, and program synthesis.
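To make the "proofs as verifiable programs" point concrete, the Lean 4 snippet below (our own illustration, not part of the tutorial materials; the theorem name is arbitrary) states and proves commutativity of addition on the natural numbers. The proof assistant's kernel checks every step, so an incorrect proof is rejected rather than accepted as plausible-looking output:

```lean
-- Commutativity of addition on Nat, proved by induction on the first argument.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp          -- base case: 0 + b = b + 0
  | succ n ih =>          -- inductive step, with hypothesis ih : n + b = b + n
    rw [Nat.succ_add, ih, Nat.add_succ]
```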
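The "open-source tools that provide interfaces to proof assistants" mentioned in the abstract include, for example, LeanDojo. The sketch below shows the kind of LLM-in-the-loop proof search such an interface enables, assuming LeanDojo's Python entry points (LeanGitRepo, Theorem, Dojo, run_tac); the commit, theorem name, and generate_tactic stand-in are illustrative placeholders, not the tutorial's actual code:

```python
# Sketch of tactic-by-tactic proof search through a proof-assistant interface.
from lean_dojo import Dojo, LeanGitRepo, Theorem, TacticState, ProofFinished

repo = LeanGitRepo("https://github.com/leanprover-community/mathlib4", "...")  # placeholder commit
thm = Theorem(repo, "Mathlib/Algebra/Group/Basic.lean", "mul_one")  # placeholder theorem

def generate_tactic(proof_state: str) -> str:
    """Stand-in for an LLM call that maps a pretty-printed proof state to a tactic."""
    return "simp"

with Dojo(thm) as (dojo, state):
    for _ in range(10):  # bounded greedy search; real systems use best-first search
        result = dojo.run_tac(state, generate_tactic(state.pp))
        if isinstance(result, ProofFinished):
            print("Proof complete and verified.")
            break
        if not isinstance(result, TacticState):
            break  # the tactic failed; a real searcher would backtrack
        state = result  # continue from the new proof state
```

Because the proof assistant verifies each tactic, a candidate proof either checks or it does not; this is the rigorous, hallucination-free evaluation the abstract refers to.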
