Poster

COLD: Causal reasOning in cLosed Daily activities

Abhinav Joshi · Areeb Ahmad · Ashutosh Modi


Abstract:

Large Language Models (LLMs) have shown state-of-the-art performance on a variety of tasks, including arithmetic and reasoning; however, to gauge the intellectual capabilities of LLMs, causal reasoning has become a reliable proxy for validating a human-like general understanding of the mechanics and intricacies of the world. Previous works in natural language processing (NLP) have either focused on open-ended causal reasoning via causal commonsense reasoning (CCR) or framed symbolic-representation-based question answering for theoretically grounded analysis via a causal inference engine. The former has the advantage of real-world grounding but lacks theoretically grounded analysis/validation, whereas the latter is far from real-world grounding. In this work, we bridge this gap by proposing the COLD (Causal reasOning in cLosed Daily activities) framework, which is built upon human understanding of daily real-world activities to reason about the causal nature of events. We show that the proposed framework facilitates the creation of an enormous number of causal queries (~8 million) and comes close to a mini-Turing test, simulating causal reasoning to evaluate the understanding of a daily real-world task. We evaluate multiple LLMs on the created causal queries and find that causal reasoning is challenging even for activities that are trivial for humans. We further explore the causal reasoning abilities of LLMs using the backdoor criterion to determine the causal strength between events.
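The backdoor criterion mentioned above can be illustrated with a generic sketch of backdoor adjustment (this is not the paper's implementation; the joint distribution below, the binary variables, and the function names are all hypothetical). Given an observed confounder Z that blocks all backdoor paths from X to Y, the interventional distribution is P(y | do(x)) = Σ_z P(y | x, z) P(z), and the contrast between do(x=1) and do(x=0) gives one measure of causal strength:

```python
# Toy joint distribution P(Z, X, Y) over binary variables, where Z is an
# observed confounder satisfying the backdoor criterion for X -> Y.
# The probabilities are illustrative only and sum to 1.
joint = {
    # (z, x, y): probability
    (0, 0, 0): 0.18, (0, 0, 1): 0.02,
    (0, 1, 0): 0.04, (0, 1, 1): 0.16,
    (1, 0, 0): 0.06, (1, 0, 1): 0.14,
    (1, 1, 0): 0.08, (1, 1, 1): 0.32,
}

def marginal_z(z):
    """P(Z = z), marginalizing X and Y out of the joint."""
    return sum(p for (zz, _, _), p in joint.items() if zz == z)

def cond_y_given_xz(y, x, z):
    """P(Y = y | X = x, Z = z) from the joint distribution."""
    den = joint[(z, x, 0)] + joint[(z, x, 1)]
    return joint[(z, x, y)] / den

def p_y_do_x(y, x):
    """Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) * P(z)."""
    return sum(cond_y_given_xz(y, x, z) * marginal_z(z) for z in (0, 1))

# One notion of causal strength of X on Y: the interventional contrast.
effect = p_y_do_x(1, 1) - p_y_do_x(1, 0)
```

In this toy example the observational association P(y=1 | x=1) overstates the effect because Z influences both X and Y; adjusting over Z removes that confounding before comparing the two interventions.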
