

Poster

OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step

Owen Dugan · Donato Jiménez-Benetó · Charlotte Loh · Zhuo Chen · Rumen Dangovski · Marin Soljacic

East Exhibit Hall A-C #3001
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still struggle to perform complex arithmetic operations accurately. To achieve accurate calculations, language model systems often enable LLMs to generate code for arithmetic operations. However, this approach compromises speed and security and, if finetuning is involved, risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of an LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 8B Instruct with OccamNet as the symbolic model (OccamLlama) achieves 100% accuracy on single arithmetic operations (+, -, *, /, sin, cos, log, exp, sqrt), outperforming GPT-4o and matching GPT-4o with a code interpreter. OccamLlama also outperforms both Llama 3 8B Instruct and GPT-3.5 Turbo on multistep reasoning problems involving challenging arithmetic, thus enabling small LLMs to match the arithmetic performance of much larger models. Our code is available at https://anonymous.4open.science/r/OccamLlama.
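To give a rough intuition for the idea of routing from hidden states to an exact symbolic computation, here is a minimal, purely illustrative sketch. All names, weights, and the two-operand setup are assumptions for demonstration, not the paper's actual architecture: a small linear "controller" scores each operation against an LLM hidden state, and the winning operation is then executed exactly in floating point instead of being decoded token by token.

```python
import math

# Hypothetical sketch: a lightweight controller reads an LLM hidden state,
# selects a symbolic arithmetic operation, and executes it exactly.
# The operation set mirrors the paper's single-op benchmark; the weights
# and 2-dimensional hidden state are toy assumptions.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
    "sqrt": lambda a, b: math.sqrt(a),  # unary: ignores b
}

def select_op(hidden_state, weights):
    """Pick the op whose score (dot product with the hidden state) is highest."""
    scores = {
        name: sum(h * w for h, w in zip(hidden_state, weights[name]))
        for name in OPS
    }
    return max(scores, key=scores.get)

def exact_arithmetic_step(hidden_state, weights, operands):
    """One step: route via the hidden state, then compute exactly."""
    op = select_op(hidden_state, weights)
    a, b = operands
    return op, OPS[op](a, b)

# Toy example: assumed weights that route this hidden state to "mul".
weights = {
    "add": [1.0, 0.0], "sub": [0.0, 1.0], "mul": [1.0, 1.0],
    "div": [-1.0, 0.0], "sqrt": [0.0, -1.0],
}
op, result = exact_arithmetic_step([0.9, 0.8], weights, (12.0, 7.0))
print(op, result)
```

Because the selected operation is evaluated symbolically rather than generated digit by digit, the result is exact regardless of operand size, which is the property the abstract highlights.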
