Modeling Open World Cognition as On-Demand Synthesis of Probabilistic Models
Abstract
When faced with novel situations, people can marshal relevant considerations from a wide range of background knowledge and use these for inference and prediction. How do we draw in globally relevant information and reason over it coherently? We explore the hypothesis that people reason by constructing small but structured ad-hoc mental models on the fly, tailored to novel situations. We propose a computational implementation of this idea, a ``Model Synthesis Architecture'' (MSA), which uses language models to parameterize global, relevance-based retrieval of variables, and probabilistic programs to implement bespoke, coherent world models. We evaluate our MSA, along with ablations and baselines, as a model of human judgments across a sequence of experiments that require progressively more open-ended, open-world reasoning about situations described in natural language. Across all experiments, the MSA captures human judgments and outperforms the base LM alone, suggesting that MSAs offer a path toward capturing coherent human reasoning in open-ended domains.