Poster
in
Workshop: MATH-AI: The 3rd Workshop on Mathematical Reasoning and AI

Augmenting Large Language Models with Symbolic Rule Learning for Robust Numerical Reasoning

Hadeel Al-Negheimish · Pranava Madhyastha · Alessandra Russo

Keywords: [ LLMs ] [ Numerical Reasoning ] [ ASP ] [ neuro-symbolic ] [ Symbolic Learning ] [ MRC ] [ QA ]


Abstract:

While some prompting strategies have been proposed to elicit reasoning in Large Language Models (LLMs), numerical reasoning for machine reading comprehension remains a difficult challenge. We propose a neuro-symbolic approach that uses in-context learning with LLMs to decompose complex questions into simpler ones, and symbolic learning methods to learn rules for recomposing the partial answers. We evaluate it on different numerical subsets of the DROP benchmark; the results show that it is competitive with DROP-specific SOTA models and significantly outperforms pure LLM prompting methods. Our approach is data-efficient, since it involves no additional training or fine-tuning. Furthermore, the neuro-symbolic approach facilitates robust numerical reasoning: the model stays faithful to the passage it is presented with, and provides interpretable and verifiable reasoning traces.
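To make the decompose-and-recompose idea concrete, here is a minimal, purely illustrative sketch (not the authors' implementation): the LLM decomposition and sub-question answering are replaced by stand-in functions over a toy passage, and the symbolic recomposition rule, which the paper would learn with ASP-based symbolic rule learning, is hand-written as a simple difference rule.

```python
import re

PASSAGE = ("The home team scored 24 points in the first half "
           "and 13 points in the second half.")

def decompose(question):
    # Stand-in for LLM in-context decomposition: a "how many more"
    # comparison question becomes two simple extraction sub-questions.
    return ["points in the first half", "points in the second half"]

def answer(sub_question, passage):
    # Stand-in for the LLM answering a simple sub-question: extract the
    # number immediately preceding the sub-question phrase in the passage.
    match = re.search(r"(\d+)\s+" + re.escape(sub_question), passage)
    return int(match.group(1)) if match else None

def recompose_difference(partials):
    # Symbolic recomposition rule (hand-written here; the paper learns
    # such rules symbolically): the answer is the difference of the parts.
    a, b = partials
    return abs(a - b)

question = "How many more points were scored in the first half than in the second?"
partials = [answer(q, PASSAGE) for q in decompose(question)]
print(recompose_difference(partials))  # difference of the two partial answers
```

Because the final answer is computed by an explicit rule over extracted partial answers, the reasoning trace (sub-questions, partial answers, rule applied) is inspectable and verifiable, which is the robustness property the abstract highlights.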