Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Learning to Reason and Memorize with Self-Questioning

Jack Lanchantin · Shubham Toshniwal · Jason E Weston · Arthur Szlam · Sainbayar Sukhbaatar


Abstract:

Large language models have been shown to struggle with limited context memory and multi-step reasoning [1]. We propose a simple method that addresses both of these problems by allowing the model to ask and answer its own questions. Unlike recent scratchpad approaches, the model can deviate from the input context at any time to self-question. This lets it recall information and perform reasoning on the fly as it reads the context, thereby extending its memory and enabling multi-step reasoning. Our experiments on two synthetic tasks demonstrate that, by performing self-questioning at inference time, our method successfully generalizes to instances more complicated than those seen during training.
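
To make the idea concrete, below is a minimal sketch of what interleaving self-questioning with reading might look like. This is not the authors' implementation: `generate` stands in for any language-model completion call, and the prompt markers ("Self-ask:", "Self-answer:") are illustrative placeholders, not taken from the paper.

```python
# Hypothetical sketch of self-questioning at inference time.
from typing import Callable, List


def self_questioning_read(
    context_chunks: List[str],
    final_question: str,
    generate: Callable[[str], str],
) -> str:
    """Read the context chunk by chunk, letting the model interleave
    self-generated question/answer pairs that act as working memory."""
    memory: List[str] = []  # accumulated self-asked Q/A pairs

    for chunk in context_chunks:
        # Prompt the model to ask itself a question about what it just read,
        # conditioned on everything it has recalled so far.
        prompt = "\n".join(memory) + f"\nContext: {chunk}\nSelf-ask:"
        question = generate(prompt)

        # Let the model answer its own question, recalling earlier facts.
        answer = generate(prompt + f" {question}\nSelf-answer:")

        # Store the pair so later reasoning steps can build on it.
        memory.append(f"Q: {question}\nA: {answer}")

    # Answer the final question using the accumulated self-Q/A memory
    # rather than the raw (possibly too long) context.
    final_prompt = "\n".join(memory) + f"\nQuestion: {final_question}\nAnswer:"
    return generate(final_prompt)
```

The key contrast with a scratchpad, as described in the abstract, is that the question/answer steps here are generated while reading the context, not only after it has been consumed in full.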