

Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL) Workshop

Enhancing Understanding in Generative Agents through Active Inquiring

Jiaxin Ge · Kaiya Zhao · Manuel Cortes · Jovana Kondic · Shuying Luo · Michelangelo Naim · Andrew Ahn · Guangyu Robert Yang

Keywords: [ Generative Agents ] [ LLM ]


Abstract:

As artificial intelligence advances, Large Language Models (LLMs) have evolved beyond being mere tools into human-like agents that can converse, reflect, plan, and set goals. However, these models still struggle with open-ended question answering and often fail to quickly understand unfamiliar scenarios. To address this, we ask: how do humans manage strange situations so effectively? We believe it is largely due to our natural curiosity: a built-in drive to predict the future and to seek explanations when those predictions do not align with reality. Unlike humans, LLMs typically accept information passively, without an inherent desire to question or doubt, which may be why they struggle to understand new situations. Motivated by this, our study explores equipping LLM agents with human-like curiosity. Can these models move from being passive processors to active seekers of understanding, mirroring human behavior? And does this adaptation benefit them as it does humans? To find out, we introduce an experimental framework in which generative agents navigate strange and unfamiliar situations, and their understanding is then assessed through interview questions about those situations. Initial results show notable improvements for agents equipped with surprise and inquiry mechanisms over those without. This research is a step toward creating more human-like agents and highlights the potential benefits of integrating human-like traits into models.
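The abstract does not specify how surprise and inquiry are implemented, but one plausible reading of the predict-compare-inquire loop it describes is sketched below in Python. Everything here, including the query_llm helper, the prompt wording, and the surprise threshold, is an illustrative assumption rather than the authors' implementation.

    from dataclasses import dataclass, field

    def query_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call (e.g., a chat-completion API).
        raise NotImplementedError("plug in an LLM backend here")

    @dataclass
    class CuriousAgent:
        # Sketch of a generative agent with surprise-driven active inquiry.
        memory: list = field(default_factory=list)

        def step(self, observation: str, surprise_threshold: int = 7) -> None:
            # 1. Predict the next event from the agent's current memory.
            prediction = query_llm(
                "Memories: " + "; ".join(self.memory)
                + "\nPredict what happens next in one sentence."
            )
            # 2. Rate how surprising the actual observation is given the
            #    prediction -- a crude proxy for prediction error.
            rating = query_llm(
                "Prediction: " + prediction
                + "\nObservation: " + observation
                + "\nRate the surprise from 1 to 10. Reply with a number only."
            )
            try:
                surprise = int(rating.strip())
            except ValueError:
                surprise = 0  # treat unparseable ratings as unsurprising
            self.memory.append(observation)
            # 3. When surprised, actively inquire: ask a question about the
            #    anomaly and store the explanation, instead of passively
            #    ingesting the observation alone.
            if surprise >= surprise_threshold:  # threshold is an arbitrary choice
                question = query_llm(
                    "Observation: " + observation
                    + "\nAsk one question that would best explain this event."
                )
                explanation = query_llm(question)  # answered by environment/oracle
                self.memory.append("Q: " + question + " A: " + explanation)

The one design choice taken directly from the abstract is that inquiry is gated on surprise: the agent asks questions only when its prediction fails, rather than questioning every observation.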
