A Causal Framework to Quantify Robustness of Mathematical Reasoning with Language Models
Alessandro Stolfo · Zhijing Jin · Kumar Shridhar · Bernhard Schölkopf · Mrinmaya Sachan

We have recently witnessed a number of impressive results on hard mathematical reasoning problems with large language models (LLMs). At the same time, the robustness of these models has also been called into question. Building on the idea of behavioral testing, we propose a novel framework, which pins down the causal effect of each factor in the input, e.g., the surface form of the problem text, the operands, and math operators, on the output. By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of LLMs in terms of robustness and sensitivity to direct interventions in the input space. We apply our framework to a test bed of bivariate math word problems. Our analysis shows that robustness does not appear to continuously improve as a function of scale, but that the recent LLM, GPT-3-Instruct (175B), achieves a dramatic improvement in both robustness and sensitivity, compared to all other GPT variants.
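To make the kind of intervention the abstract describes concrete, here is a minimal sketch of a direct intervention on one input factor of a bivariate math word problem. The template, names, and helper functions are hypothetical illustrations, not the paper's actual test bed: holding the surface form and operator fixed, we intervene on a single operand, and a robust model's answer should track the corresponding change in the ground truth.

```python
import operator

def render_problem(template: str, n1: int, n2: int) -> str:
    """Instantiate a two-operand (bivariate) word-problem template."""
    return template.format(n1=n1, n2=n2)

def ground_truth(n1: int, n2: int, op) -> int:
    """Correct answer under the problem's math operator."""
    return op(n1, n2)

# Hypothetical template; the surface form and operator (+) are held fixed.
template = ("Alice has {n1} apples and buys {n2} more. "
            "How many apples does she have?")

# Base input and an intervention on a single operand, n1: 3 -> 7.
base = render_problem(template, 3, 5)
intervened = render_problem(template, 7, 5)

# Under the causal graph, intervening on an operand changes the correct
# answer; a sensitive yet robust model should shift its prediction by
# exactly the same amount.
print(ground_truth(3, 5, operator.add))  # 8
print(ground_truth(7, 5, operator.add))  # 12
```

Interventions on the surface form (rephrasing the text while keeping operands and operator fixed) work the same way, except the ground truth must stay constant, so any change in the model's output distribution measures a lack of robustness rather than desired sensitivity.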

Author Information

Alessandro Stolfo (ETH Zürich)
Zhijing Jin (ETH Zürich)
Kumar Shridhar (ETH Zürich)
Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen)
Mrinmaya Sachan (ETH Zürich)