The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like.
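To make the idea of "examining how internal states relate to each other" concrete, here is a minimal illustrative sketch (not the authors' method): it compares pairwise cosine similarities of the hidden-state vectors a pretrained transformer assigns to a few probe words, one simple way to inspect a model's internal relational structure. The model name ("gpt2"), the probe words, the layer choice, and the mean-pooling over tokens are all arbitrary assumptions made for this example.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "gpt2"                            # assumption: any model exposing hidden states
PROBE_WORDS = ["dog", "cat", "car", "truck"]   # hypothetical probe concepts

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def hidden_state(word: str, layer: int = -1) -> torch.Tensor:
    """Return the mean-pooled hidden state for `word` at the chosen layer."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (batch, seq_len, hidden_size) tensors
    return outputs.hidden_states[layer][0].mean(dim=0)

vectors = torch.stack([hidden_state(w) for w in PROBE_WORDS])
vectors = torch.nn.functional.normalize(vectors, dim=-1)
similarity = vectors @ vectors.T   # pairwise cosine similarities between internal states

for word, row in zip(PROBE_WORDS, similarity):
    print(word, [f"{s:.2f}" for s in row.tolist()])
```

On a conceptual-role view, what carries meaning is the pattern of relations in this similarity structure (for example, that "dog" sits closer to "cat" than to "truck"), not any single vector considered in isolation.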
Author Information
Steven Piantadosi (University of California-Berkeley)
Felix Hill (DeepMind)
More from the Same Authors
- 2021: Task-driven Discovery of Perceptual Schemas for Generalization in Reinforcement Learning (Wilka Carvalho · Andrew Lampinen · Kyriacos Nikiforou · Felix Hill · Murray Shanahan)
- 2022: Transformers generalize differently from information stored in context vs in weights (Stephanie Chan · Ishita Dasgupta · Junkyung Kim · Dharshan Kumaran · Andrew Lampinen · Felix Hill)
- 2022: Collaborating with language models for embodied reasoning (Ishita Dasgupta · Christine Kaeser-Chen · Kenneth Marino · Arun Ahuja · Sheila Babayan · Felix Hill · Rob Fergus)
- 2023 Poster: The Transient Nature of Emergent In-context Learning in Transformers (Aaditya Singh · Stephanie Chan · Ted Moskovitz · Erin Grant · Andrew Saxe · Felix Hill)
- 2022 Poster: Data Distributional Properties Drive Emergent In-Context Learning in Transformers (Stephanie Chan · Adam Santoro · Andrew Lampinen · Jane Wang · Aaditya Singh · Pierre Richemond · James McClelland · Felix Hill)
- 2022 Poster: Semantic Exploration from Language Abstractions and Pretrained Representations (Allison Tam · Neil Rabinowitz · Andrew Lampinen · Nicholas Roy · Stephanie Chan · DJ Strouse · Jane Wang · Andrea Banino · Felix Hill)
- 2021 Poster: Attention over Learned Object Embeddings Enables Complex Visual Reasoning (David Ding · Felix Hill · Adam Santoro · Malcolm Reynolds · Matt Botvinick)
- 2021 Poster: Multimodal Few-Shot Learning with Frozen Language Models (Maria Tsimpoukelli · Jacob L Menick · Serkan Cabi · S. M. Ali Eslami · Oriol Vinyals · Felix Hill)
- 2021 Poster: Towards mental time travel: a hierarchical memory for reinforcement learning agents (Andrew Lampinen · Stephanie Chan · Andrea Banino · Felix Hill)
- 2021 Oral: Attention over Learned Object Embeddings Enables Complex Visual Reasoning (David Ding · Felix Hill · Adam Santoro · Malcolm Reynolds · Matt Botvinick)
- 2018 Poster: Neural Arithmetic Logic Units (Andrew Trask · Felix Hill · Scott Reed · Jack Rae · Chris Dyer · Phil Blunsom)
- 2017: Panel Discussion (Felix Hill · Olivier Pietquin · Jack Gallant · Raymond Mooney · Sanja Fidler · Chen Yu · Devi Parikh)
- 2017: Grounded Language Learning in a Simulated 3D World (Felix Hill)