Poster in Workshop: Associative Memory & Hopfield Networks in 2023

In-Context Exemplars as Clues to Retrieving from Large Associative Memory

Jiachen Zhao


Abstract:

Large language models (LLMs) have recently made remarkable progress in natural language processing (NLP). Among their most distinctive abilities is in-context learning (ICL), which lets an LLM learn patterns from in-context exemplars without any parameter updates. However, intuition for how ICL works remains limited. In this paper, we present a novel perspective on prompting LLMs by conceptualizing it as contextual retrieval from a model of associative memory, a view that is also biologically plausible. We establish a theoretical interpretation of ICL based on an extension of the framework of Hopfield Networks. Building on this theory, we further analyze how in-context exemplars influence the performance of ICL. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for a deeper understanding of LLMs.
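The abstract does not spell out the Hopfield extension it builds on, but the usual starting point for this line of work is the retrieval update of a modern continuous Hopfield network (Ramsauer et al., 2020), where a query state is refreshed toward the stored patterns via a softmax over similarities. The sketch below is a minimal illustration of that standard update, not the paper's specific framework; all function and variable names are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=1.0, steps=1):
    """Modern continuous Hopfield retrieval (Ramsauer et al., 2020).

    X    : (d, N) matrix whose columns are the N stored patterns
    xi   : (d,) query / state vector (e.g., a noisy cue)
    beta : inverse temperature; larger beta gives sharper retrieval
    """
    for _ in range(steps):
        # xi_new = X softmax(beta * X^T xi)
        xi = X @ softmax(beta * (X.T @ xi))
    return xi

# Toy usage: a noisy cue retrieves the stored pattern it resembles.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))              # 10 stored patterns
cue = X[:, 3] + 0.3 * rng.standard_normal(64)  # noisy version of pattern 3
out = hopfield_retrieve(X, cue, beta=4.0)
print(np.argmax(X.T @ out))                    # most similar stored pattern; likely 3
```

Under the abstract's reading, the in-context exemplars play the role of the cue: they condition which stored "memory" (pattern of behavior) the model retrieves, rather than training new parameters.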
