We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the generated proofs for logical consistency, along with the accuracy of the final inference. We observe length-generalization issues when models are evaluated on sequences longer than those seen during training; however, generalization improves after models are exposed to longer, exhaustive proofs. In addition, we discover that TLMs generalize better with backward-chaining proofs than with their forward-chaining counterparts, while finding it easier to generate forward-chaining proofs. We also observe that models that are not trained to generate proofs generalize better to problems that require longer proofs, which suggests that Transformers have efficient internal reasoning strategies that are harder to interpret. These results highlight the systematic generalization behavior of TLMs in the context of logical reasoning, and we believe this work motivates a deeper inspection of their underlying reasoning strategies.
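To make the forward- vs. backward-chaining distinction concrete, the following is a minimal sketch (plain Python, with hypothetical kinship facts and composition rules; it is not the paper's dataset, model, or code). Forward chaining derives new facts outward from the premises until the query is reached, while backward chaining decomposes the query into sub-goals until it bottoms out in known facts.

```python
# Minimal sketch of symbolic forward- vs. backward-chaining proof search.
# Facts are (head, relation, tail) triples; each rule composes two relations
# into a third, e.g. sister-of o mother-of = aunt-of. Hypothetical example only.

FACTS = {("Anne", "sister", "Beth"), ("Beth", "mother", "Carl")}
RULES = {("sister", "mother"): "aunt"}


def forward_chain(facts, rules, goal):
    """Repeatedly derive new facts from known ones until the goal appears."""
    facts, proof = set(facts), []
    while goal not in facts:
        new = set()
        for (a, r1, b) in facts:
            for (b2, r2, c) in facts:
                if b == b2 and (r1, r2) in rules:
                    derived = (a, rules[(r1, r2)], c)
                    if derived not in facts:
                        new.add(derived)
                        proof.append(((a, r1, b), (b2, r2, c), derived))
        if not new:
            return None  # goal is not derivable from the facts
        facts |= new
    return proof


def backward_chain(facts, rules, goal):
    """Split the goal into sub-goals and recurse until known facts are reached."""
    if goal in facts:
        return []
    a, r, c = goal
    for (r1, r2), r_out in rules.items():
        if r_out != r:
            continue
        # try every intermediate entity b such that (a, r1, b) and (b, r2, c) hold
        for (x, rel, b) in facts:
            if x == a and rel == r1:
                left = backward_chain(facts, rules, (a, r1, b))
                right = backward_chain(facts, rules, (b, r2, c))
                if left is not None and right is not None:
                    return left + right + [((a, r1, b), (b, r2, c), goal)]
    return None


goal = ("Anne", "aunt", "Carl")
print(forward_chain(FACTS, RULES, goal))   # proof built from premises toward the query
print(backward_chain(FACTS, RULES, goal))  # proof built from the query toward the premises
```

In the paper's setting, the proof steps themselves are expressed in natural language and generated by the TLM rather than by an explicit search procedure; the sketch only illustrates the two proof orderings being compared.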
Author Information
Nicolas Gontier (Mila, Polytechnique Montréal)
Koustuv Sinha (McGill University / Mila / FAIR)
Siva Reddy (McGill University)
Chris Pal (Montreal Institute for Learning Algorithms, École Polytechnique, Université de Montréal)
More from the Same Authors
- 2021 : Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning »
  Nan Rosemary Ke · Aniket Didolkar · Sarthak Mittal · Anirudh Goyal · Guillaume Lajoie · Stefan Bauer · Danilo Jimenez Rezende · Yoshua Bengio · Chris Pal · Michael Mozer
- 2021 : Beyond Target Networks: Improving Deep $Q$-learning with Functional Regularization »
  Alexandre Piche · Joseph Marino · Gian Maria Marconi · Valentin Thomas · Chris Pal · Mohammad Emtiyaz Khan
- 2022 : Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models »
  Vikram Voleti · Chris Pal · Adam Oberman
- 2022 : Implicit Offline Reinforcement Learning via Supervised Learning »
  Alexandre Piche · Rafael Pardinas · David Vazquez · Igor Mordatch · Chris Pal
- 2022 : A General-Purpose Neural Architecture for Geospatial Systems »
  Martin Weiss · Nasim Rahaman · Frederik Träuble · Francesco Locatello · Alexandre Lacoste · Yoshua Bengio · Erran Li Li · Chris Pal · Bernhard Schölkopf
- 2022 Poster: Attention-based Neural Cellular Automata »
  Mattie Tesfaldet · Derek Nowrouzezahrai · Chris Pal
- 2022 Poster: Neural Attentive Circuits »
  Martin Weiss · Nasim Rahaman · Francesco Locatello · Chris Pal · Yoshua Bengio · Bernhard Schölkopf · Erran Li Li · Nicolas Ballas
- 2022 Poster: MCVD - Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation »
  Vikram Voleti · Alexia Jolicoeur-Martineau · Chris Pal
- 2019 Workshop: Retrospectives: A Venue for Self-Reflection in ML Research »
  Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier
- 2019 Poster: Real-Time Reinforcement Learning »
  Simon Ramstedt · Chris Pal
- 2017 : Competition III: The Conversational Intelligence Challenge »
  Mikhail Burtsev · Ryan Lowe · Iulian Vlad Serban · Yoshua Bengio · Alexander Rudnicky · Alan W Black · Shrimai Prabhumoye · Artem Rodichev · Nikita Smetanin · Denis Fedorenko · CheongAn Lee · EUNMI HONG · Hwaran Lee · Geonmin Kim · Nicolas Gontier · Atsushi Saito · Andrey Gershfeld · Artem Burachenok