

Poster in Workshop: MATH-AI: The 3rd Workshop on Mathematical Reasoning and AI

Llemma: An Open Language Model For Mathematics

Zhangir Azerbayev · Hailey Schoelkopf · Keiran Paster · Marco Dos Santos · Stephen McAleer · Albert Q. Jiang · Jia Deng · Stella Biderman · Sean Welleck

Keywords: [ pretraining ] [ language models ]


Abstract:

We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark, Llemma outperforms all known openly released models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
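Since the released checkpoints are standard causal language models, they can be queried with the Hugging Face transformers library. The sketch below is a minimal example of sampling a solution from the 7B model; the Hub identifier "EleutherAI/llemma_7b" and the prompt format are assumptions for illustration, not details stated on this page.

    # Minimal sketch: greedy decoding from the 7B checkpoint.
    # Assumes the model is hosted on the Hugging Face Hub under
    # "EleutherAI/llemma_7b" (an assumption, not confirmed here).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/llemma_7b"  # assumed Hub identifier
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    # A simple problem/solution prompt; the paper's exact prompting
    # setup (e.g., few-shot examples) may differ.
    prompt = "Problem: What is the derivative of sin(x) * e^x?\nSolution:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))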
