SATNet is an award-winning MAXSAT solver that can infer logical rules and be integrated as a differentiable layer in a deep neural network. It has been shown to solve Sudoku puzzles visually from examples of puzzle digit images, and was heralded as an impressive achievement towards the longstanding AI goal of combining pattern recognition with logical reasoning. In this paper, we clarify SATNet's capabilities by showing that, in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku (0% test accuracy). More generally, the failure can be pinpointed to its inability to learn to assign symbols to perceptual phenomena, also known as the symbol grounding problem, which has long been thought to be a prerequisite for intelligent agents to perform real-world logical reasoning. We propose an MNIST-based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general. Naive applications of SATNet on this test lead to performance worse than that of models without logical reasoning capabilities. We report on the causes of SATNet's failure and how to prevent them.
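To make the setup concrete, below is a minimal PyTorch sketch of the kind of MNIST-based symbol grounding sanity check the abstract describes. The names (`PerceptionNet`, `DifferentiableSolverLayer`) and the parity task are illustrative assumptions rather than the paper's exact protocol; the placeholder solver layer merely stands in for SATNet or any other differentiable symbolic solver. The property being tested is that the pipeline receives supervision only on the final label, never on the intermediate symbol assigned to each image.

```python
# Minimal sketch of an MNIST-based symbol grounding sanity check.
# Illustrative only: DifferentiableSolverLayer is a placeholder for a
# differentiable symbolic solver such as SATNet, and the parity task is
# an assumed instantiation, not necessarily the paper's exact protocol.
import torch
import torch.nn as nn
from torchvision import datasets, transforms


class PerceptionNet(nn.Module):
    """Maps a 28x28 digit image to a vector of soft symbols (one probability per bit)."""
    def __init__(self, n_bits=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, n_bits), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class DifferentiableSolverLayer(nn.Module):
    """Placeholder for a differentiable symbolic solver.
    Here it is just a linear map over the soft bits; a real SATNet layer
    would instead solve a relaxed MAXSAT problem over these variables."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)

    def forward(self, z):
        return self.fc(z)


perception = PerceptionNet()
solver = DifferentiableSolverLayer(n_in=4, n_out=2)
model = nn.Sequential(perception, solver)

# Train end to end on the *final* label only (here: digit parity); the
# intermediate symbol for each image is never supervised -- this is the
# symbol grounding aspect of the test.
train = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, digits in loader:
    parity = digits % 2              # final supervision only
    logits = model(images)
    loss = loss_fn(logits, parity)
    opt.zero_grad()
    loss.backward()
    opt.step()
    break  # single training step shown for brevity
```

A solver that cannot ground symbols will fail to learn this mapping even though a plain classifier handles it easily, which is the sanity-check criterion the abstract refers to.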
Author Information
Oscar Chang (Columbia University)
Lampros Flokas (Columbia University)
Hod Lipson (Columbia University)
Michael Spranger (Sony)
More from the Same Authors
- 2022 : The Emergence of Abstract and Episodic Neurons in Episodic Meta-RL »
  Badr AlKhamissi · Muhammad ElNokrashy · Michael Spranger
- 2022 : MocoSFL: enabling cross-client collaborative self-supervised learning »
  Jingtao Li · Lingjuan Lyu · Daisuke Iso · Chaitali Chakrabarti · Michael Spranger
- 2022 : Feasible and Desirable Counterfactual Generation by Preserving Human Defined Constraints »
  Homayun Afrabandpey · Michael Spranger
- 2022 Poster: Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling »
  Junyuan Hong · Lingjuan Lyu · Jiayu Zhou · Michael Spranger
- 2021 Poster: Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Georgios Piliouras
- 2020 Poster: No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Thanasis Lianeas · Panayotis Mertikopoulos · Georgios Piliouras
- 2020 Spotlight: No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Thanasis Lianeas · Panayotis Mertikopoulos · Georgios Piliouras
- 2020 Expo Talk Panel: Hypotheses Generation for Applications in Biomedicine and Gastronomy »
  Michael Spranger · Kosuke Aoki
- 2019 Poster: Efficiently avoiding saddle points with zero order methods: No gradients required »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Georgios Piliouras
- 2019 Poster: Poincaré Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Georgios Piliouras
- 2019 Spotlight: Poincaré Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Georgios Piliouras