Pragmatics – the aspects of language use that involve reasoning about context and about other agents' goals and belief states – has traditionally been treated as the "wastebasket" of language research (Bar-Hillel, 1971), posing a challenge for both cognitive theories and artificial intelligence systems. Ideas from theoretical linguistics have inspired computational applications, such as referring expression generation (Krahmer and van Deemter, 2012) and computational models of dialogue and the recognition of speech and dialogue acts (Bunt and Black, 2000; Jurafsky, 2006; Ginzburg and Fernández, 2010; Bunt, 2016). Only recently, however, have powerful artificial models based on neural (subsymbolic) architectures come into focus that generate or interpret language in pragmatically sophisticated and potentially open-ended ways (Golland et al., 2010; Andreas and Klein, 2016; Monroe et al., 2017; Fried et al., 2018), building on simultaneous advances in the cognitive science of pragmatics (Franke, 2011; Frank and Goodman, 2012). However, such models still fall short of human pragmatic reasoning in several important respects. For example, existing approaches are often tailored to, or even trained to excel on, a specific pragmatic task (e.g., Mao et al. (2016) on discriminative object description), leaving human-like task flexibility unaccounted for. It also remains largely underexplored how pragmatics connects to domain-general reasoning, how it may be implemented efficiently, and how it may arise over the course of learning and evolution. In this workshop, we aim to bring together researchers from Cognitive Science, Linguistics, and Machine Learning to think critically about the next generation of artificial pragmatic agents and theories of human pragmatic reasoning.
Mon 5:55 a.m. - 6:00 a.m.
Opening remarks
Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman
Mon 6:00 a.m. - 6:20 a.m.
The Neurobiology of Pragmatics (Invited talk)
In this presentation I will discuss recent insights into both the time course of pragmatic processing and the key neural infrastructure for inferring speaker meaning from coded meaning. I will show why mirror neurons are not able to handle pragmatic information. In addition, I will present evidence for the role of the Theory of Mind (ToM) network in the processing of pragmatic information.
Peter Hagoort
Mon 6:20 a.m. - 6:30 a.m.
Q&A
Mon 6:30 a.m. - 6:40 a.m.
Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes (Contributed talk)
Empathy is a complex cognitive ability based on reasoning about others' affective states. In order to better understand others and express stronger empathy in dialogues, we argue that two issues must be tackled at the same time: (i) identifying which words cause the other's emotion and (ii) reflecting those specific words in the response generation. However, existing approaches for recognizing emotion cause words in text require sub-utterance-level annotations, which are demanding to obtain. Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with only emotion labels. We show that our approach improves the best-performing dialogue agents at generating more focused empathetic responses in terms of both automatic and human evaluation.
Hyunwoo Kim · Byeongchang Kim · Gunhee Kim
Mon 6:40 a.m. - 6:45 a.m.
Q&A
Mon 6:45 a.m. - 6:55 a.m.
Lexical Pragmatics in the Wild: The Case of Complement Coercion (Contributed talk)
We inspect complement coercion sentences ("she finished the coffee", "he started a book") as a case study for modeling open-ended pragmatic interpretation. Existing computational work treats complement coercion interpretation as a task of choosing a single best-fit verb ("she finished drinking the coffee"). We instead present crowdsourcing and modeling data that support broadening the predicted classes to better capture naturalistic interpretation.
Frederick Gietz · Barend Beekhuizen
Mon 6:55 a.m. - 7:00 a.m.
Q&A
Mon 7:00 a.m. - 7:20 a.m.
Human Production Strategies for Neural Language Generation (Invited talk)
Progress on language generation has experienced a huge boost with the advent of large models trained on huge amounts of text. However, this kind of language modelling will only take us so far. Most natural language use is driven by communicative goals and is often grounded both in the conversational context and in extralinguistic information. Can we take inspiration from human production strategies in situated environments to drive forward natural language generation models? I will argue that yes, we can, and present a few examples of recent and ongoing research carried out within my group that follow this research programme.
Raquel Fernández
Mon 7:20 a.m. - 7:30 a.m.
Q&A
Mon 7:30 a.m. - 8:00 a.m.
Break / Meet-and-greet #1
Mon 8:00 a.m. - 9:00 a.m.
Panel (Discussion panel)
Mon 9:00 a.m. - 10:30 a.m.
Unveiling the Meaning Through Emotional Context (Poster)
Generating emotional text that adapts to different life scenarios is an important step towards understanding the role of context in generative language models. While large language models with billions of parameters (e.g., GPT-3) are able to produce coherent text indistinguishable from human-generated text, they sometimes fail to generate contextually relevant sentences with the anticipated sentiment tone. The main challenge in generating text with the required emotional context is the complexity of human emotions. Since the variability of emotions makes it difficult for humans to recognize the emotion in a text without understanding its context, conditional text generation that controls sentiment and context helps to prevent contextual confusion. In this paper we explore how generative language models can improve the meaning of generated text by controlling sentiment during generation and by providing broader context to generated scenarios within a given situation. We demonstrate how existing research in sentiment analysis, style transfer, and controllable text generation can be used in future research to understand the meaning of generated language through emotional context.
Tatiana Botskina
Mon 9:00 a.m. - 10:30 a.m.
Analysing Human Strategies of Information Transmission as a Function of Discourse Context (Poster)
Speakers are thought to use rational information transmission strategies for efficient communication; for example, they keep the information density of their sentences uniform over the course of written texts (Genzel and Charniak, 2002; 2003), especially so within coherent contextual units such as paragraphs. In this work, we test whether, and within which contextual units, speakers adhere to the principle of uniform information density (Jaeger and Levy, 2007) in written monologue as well as in written and spoken task-oriented dialogue. Using a pre-trained Transformer-based language model, which provides more robust measurements than the n-gram models used in prior work, we confirm that speakers adhere to the principle in newspaper articles, and we present new evidence that they also do so in written cooperative reference games as well as in spoken dialogues involving instruction giving and following. Because patterns of information transmission vary within different contextual units, we then use the context window of our language model to estimate information density as a function of the relevant utterance context; this was never explicitly measured in previous related work. We find that, when context is explicitly factored in, speakers transmit information at a stable rate in newspaper articles, but that this rate decreases in spoken open-domain and written task-oriented dialogues. We suggest that a more faithful model of communication should include production efforts and goal-oriented rewards. Our hope is that this line of work will inform the development of dialogue generation models that organise the transmission of information in a more human-like fashion.
Mario Giulianelli · Arabella Sinclair
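The quantity behind this line of work is per-word surprisal under a language model, averaged over an utterance or sentence to give its information density. As a minimal sketch of that measurement – using a toy add-one-smoothed bigram model and made-up data in place of the authors' pretrained Transformer – information density can be estimated as mean per-word surprisal:

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Fit an add-one-smoothed bigram model; return a surprisal function (bits)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def surprisal(prev, word):
        # -log2 P(word | prev), with add-one smoothing over the observed vocabulary
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
        return -math.log2(p)

    return surprisal

def information_density(sentence, surprisal):
    """Mean per-word surprisal of a tokenized sentence."""
    pairs = list(zip(sentence, sentence[1:]))
    return sum(surprisal(p, w) for p, w in pairs) / len(pairs)

# Hypothetical toy corpus; a real study would use a large text collection.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
surprisal = train_bigram(corpus)
for sent in ["the cat sat on the mat", "the mat sat on the cat"]:
    print(sent, "->", round(information_density(sent.split(), surprisal), 2))
```

A Transformer-based estimate replaces the bigram conditional with the model's next-token probability given the full (or windowed) context, which is what lets the authors vary the contextual unit over which density is computed.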
Mon 9:00 a.m. - 10:30 a.m.
The role of joint utility and pragmatic reasoning in cooperative communication (Poster)
Humans are able to communicate in sophisticated ways with only sparse signals, especially when cooperating. Two parallel theoretical perspectives on cooperative communication emphasize pragmatic reasoning and joint utility mechanisms to help resolve ambiguity. For the current study, we collected behavioral data that tested how humans select ambiguous signals in a cooperative grid-world task. The results provide support for a joint utility reasoning mechanism. We then compared human strategies to predictions from Rational Speech Acts (RSA), an established model of language pragmatics.
Yiling Yun · Stephanie Stacy · Tao Gao
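The RSA framework mentioned here can be stated in a few lines: a literal listener interprets utterances by their truth conditions, a pragmatic speaker chooses utterances in proportion to how well the literal listener would recover the intended referent, and a pragmatic listener inverts that speaker. A minimal sketch with a hypothetical three-object, three-utterance lexicon (uniform priors, no utterance costs; not the authors' implementation):

```python
import numpy as np

# Truth-conditional lexicon: lexicon[u, o] = 1 if utterance u is true of object o.
lexicon = np.array([
    [1., 1., 0.],   # "glasses":  true of objects 0 and 1
    [0., 1., 1.],   # "hat":      true of objects 1 and 2
    [0., 0., 1.],   # "mustache": true of object 2 only
])

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

def rsa(lexicon, alpha=1.0):
    """One round of Rational Speech Acts reasoning."""
    l0 = normalize(lexicon, axis=1)      # literal listener:  P(object | utterance)
    s1 = normalize(l0 ** alpha, axis=0)  # pragmatic speaker: P(utterance | object)
    l1 = normalize(s1, axis=1)           # pragmatic listener: P(object | utterance)
    return l0, s1, l1

l0, s1, l1 = rsa(lexicon)
# The pragmatic listener resolves the ambiguity of "glasses": although it is
# literally true of objects 0 and 1, object 0 becomes the more likely referent,
# since a speaker meaning object 1 had a better alternative ("hat").
```

The scalar alpha controls how strongly the speaker optimizes; deeper recursion simply alternates the speaker and listener steps.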
Mon 9:00 a.m. - 10:30 a.m.
A model of contextual representations and their role for linguistic prediction (Poster)
The predictability value of a word corresponds to its frequency of use during a cloze task, and it has been correlated with processing time and with the N400 component in EEG studies. Most importantly, the predictability value has been used extensively in the literature, even though we still know almost nothing about how a cloze task is performed. Using an interdisciplinary perspective on the nature of linguistic prediction and the kinds of cognitive processes involved therein, I developed a new theoretically driven computational approach that revisits the derivation of the predictability score. Empirical results in psycholinguistics and neurolinguistics do not support the Strong Prediction View; they tend to show that semantics and syntax are processed independently and that the semantic stream has precedence over the syntactic stream. In this poster, I present a model of linguistic prediction that is compatible with these results, in which I differentiate between the contribution coming from different levels of semantic granularity and the one coming from the coordination aspect of linguistic interaction. In this model, a linguistic prediction is derived from the combination of the contributions of four kinds of sentence-level representations. Each kind of representation triggers an activation signal that spreads throughout a conceptual space, where the level of activation of any concept at a particular time represents the degree to which it is triggered by the information retrieved from the truncated sentence and the global context. These conceptual spaces are derived from similarity spaces obtained from pre-trained word embeddings.
To represent these four sentence-level representations, I use the Learning and Inference with Schemas and Analogy (LISA) approach, a hybrid symbolic-connectionist model that codes relational structure and can represent both objects and relational roles as patterns of activation over units representing semantic features. When assigning a relative probability of occurrence to potential continuations, I considered both the contribution from the truncated sentence and the contribution coming from two kinds of contextual information: a topic model and a situation model. The topic model is derived from a pre-trained topic distribution space representing the relationship between topics and words, and the situation model is derived by combining the four kinds of sentence-level representations. These contextual representations are derived bottom-up from the meaning expressed at the sentence level, and they, in turn, influence the predictive process by constraining the linguistic prediction via a top-down signal. I then present a multi-layered processing structure of linguistic prediction that integrates the contribution from the sentence-level representations, the contribution from the contextual-level representations, and the constant interaction between the two. Preliminary empirical adequacy was assessed with three worked-out examples (a high-constraining sentence, a low-constraining sentence, and a sentence with prior discourse context) for which the theory matches the ordering that was obtained empirically. This model of linguistic prediction illustrates the crucial connection between the representational levels involved in pragmatic processing, and it conceptualizes the pragmatic stream as a processing structure. This view is compatible with recent hierarchical models of linguistic processing, and it shares some features with computationally explicit connectionist accounts of the prediction process.
Maxime Codere Corbeil
Mon 9:00 a.m. - 10:30 a.m.
Social inferencing in communication (Poster)
Conversations between close friends, family members, job applicants and hiring committees, as well as across-time conversations between authors and readers, often include vague, suggestive, imprecise, and ambiguous utterances, which offer room for interpretation and may thus elicit various responses. These conversations seemingly contradict models of ideal communication, which focus on optimal information transfer grounded in information theory [Shannon, 1948]. Here, we emphasize the need to include social inference in models of communication, which may lead to a new formalism of communicative optimality. Recent research has demonstrated how particular social factors affect both (i) how speakers choose utterances and (ii) how conversation partners interpret each other's responses. On the utterance-choice side, politeness, face, as well as dominance and control co-determine speakers' utterance choices and affect how direct and explicit they are [Beaver and Stanley, 2018; Degen et al., 2015; Khani et al., 2018; Yoon et al., 2020]. The speaker's beliefs about the listener affect the level of precision in the speaker's words and, in the case of miscalculation, may lead to either insufficiently precise or overly detailed descriptions. Both failures lead to negative social consequences for the speaker. From a machine learning (ML) perspective, the challenge may thus be posed to further develop artificial systems that choose a communicatively adequate level of precision with greater flexibility. Ideally, such systems should not only take into account the knowledge of their conversation partner, but should also optimize the objective to effectively leave appropriate room for interpretation. In the absence of linguistic cues, listeners rely on their own beliefs to resolve ambiguity. The consequent responses thus allow us to infer the reasons behind a listener's reaction.
For example, the listener's reaction to (1) – conjoined with either a positive or negative interpretation of the expletive 'man' – will likely reveal her political affiliation: (1) Man, George Bush won again [McCready, 2008, 675]. In other words, speakers can use observed listeners' responses to refine their theory of mind about them [Frith and Frith, 2005], essentially pursuing inverse, social inference. Predicting and interpreting the behavior of others, including artificial agents, has been formalized as an inverse planning or inference problem [Baker et al., 2009], essentially relying on our (typically probabilistic) expectations of how others would behave given particular circumstances [Frith and Frith, 2006]. Extending such formalisms to verbal behavior will allow building more precise models of conversation partners. Vague and ambiguous signals from the speaker open up additional room for interpretation and reaction. As a result, utterances and responses provide information about hidden cognitive aspects of the speaker and listener, respectively. This information may include aspects of their respective current beliefs, desires, and intentions concerning the current conversation, but also of their deeper beliefs, knowledge, and inference abilities in general [Wu et al., 2021]. The open challenge is to develop ML speech generation and comprehension systems that take these deeper speech-signaling considerations into account. To formalize this inference process, we are developing a recursive probabilistic processing and inference framework, formalizing how utterance choices and inferences about the underlying belief systems of conversation partners may contribute to learning about each other and to attuning a conversation to a particular conversation partner.
Finally, besides offering a framework that has the emergent tendency to generate ambiguous utterances as well as the ability to infer characteristics of the conversation partner, we also quantify additional social implications stemming from comparing inferred partner characteristics with one's own.
Asya Achimova · Martin V. Butz
Mon 9:00 a.m. - 10:30 a.m.
Context in Automated Affect Recognition (Poster)
Affect recognition depends on interpreting both expressions and their associated context. While expressions can be explicitly measured with sensor technologies, the role of context is more difficult to measure because context is often left undefined. In an effort to explicitly incorporate pragmatics in automated affect recognition, we develop a framework for categorizing context. Building upon ontologies in affective science and symbolic artificial intelligence, we highlight seven key categories: ambient sensory environment, methods of measurement, semantic representation, situational constraints, temporal dynamics, sociocultural dimensions, and personalization. In this short paper, we focus on how the epistemological categories of context influence the training and evaluation of machine learning models for affect recognition. Incorporating context in the practical and theoretical development of affect recognition models is an important step to developing more precise and accurate models.
Matt Groh · Rosalind Picard
Mon 9:00 a.m. - 10:30 a.m.
Underspecification in Executable Instructions (Poster)
This paper researches the phenomenon of underspecified executable instructions. When people read and execute an instruction and their executions differ, this can be explained by the instruction being underspecified. We investigate this phenomenon on the Hexagon dataset and analyse the types of underspecification. We propose to annotate instances of underspecified executable instructions and to predict if, where, and how an instruction could be made more specific.
Valentina Pyatkin · Royi Lachmy · Reut Tsarfaty
Mon 9:00 a.m. - 10:30 a.m.
The gap between QUD-based topic determination and learning-based topic extraction for NLG (Poster)
Generated texts should not be limited to conveying facts; they should also realize the many pragmatic aspects that make a text cohesive and coherent. Since present natural language generation (NLG) systems use learning-based methods for generation, the question arises whether and how linguistic pragmatics – which provides elaborate theories and detailed analyses of pragmatic phenomena based on these theories – could be brought to bear on learning-based NLG. Using topic determination as an example, we show that question-under-discussion (QUD) based theories of information structure provide deep insights into the discourse structure of texts, but that they cannot be mapped to learning approaches in a direct way. The main problem is the data sparseness of QUD-based corpora, which ultimately goes back to the fact that content selection and discourse planning, the first two steps in an NLG pipeline from content determination to the final linguistic realization, concern non-linguistic content and its preparation, while deep learning methods require texts for learning the correspondences between user requests and target texts.
Maurice Langner · Ralf Klabunde
Mon 9:00 a.m. - 10:30 a.m.
Ambiguity Advantage under Meaning Activation (Poster)
Traditional explanations for the presence of ambiguous words in natural language have focused on the cost of the added complexity that would accompany unambiguous languages. In this paper, we suggest that ambiguity may persist as an inevitable feature of learned languages even without complexity costs. We show that ambiguous words occur more frequently and will therefore more readily be learned with the help of communicative context, thus triggering more semantic activation between the senses of an ambiguous word. We illustrate this through a game-theoretic example.
Liping Tang
Mon 10:30 a.m. - 10:50 a.m.
The Right Words for the Job: Coordinating on Task-Relevant Conventions via Bayesian Program Learning (Invited talk)
In this talk, I'll argue that human-like language use in a variable and non-stationary social environment requires a more radical shift in our models of meaning. People not only rely on pragmatic reasoning to enrich static literal meanings, but flexibly create new literal meanings together to suit the task at hand. In other words, the central computational problem of communication is not simply transmission in context, as in classical formulations, but continual learning within and across social contexts. As a case study, I'll present a physical assembly task where pairs of human participants worked together to reconstruct block towers. We found that human participants rapidly coordinated on new, more abstract language that captured each scene's underlying structure. Motivated by these findings, we extend recent hierarchical models of convention formation with a Bayesian program learning module. This model suggests a path toward more adaptive language models that are able to 'find the right words for the job' and collaborate with human partners in a wider variety of novel contexts.
Robert Hawkins
Mon 10:50 a.m. - 11:00 a.m.
Q&A
Mon 11:00 a.m. - 11:10 a.m.
Loopholes: a Window into Value Alignment and the Learning of Meaning (Contributed talk)
Exploiting a loophole – taking advantage of the ambiguity of language to do what someone says but not what they want – is a familiar facet of fable, law, and everyday life. Engaging with loopholes requires a nuanced understanding of goals, social ambiguity, and value alignment. Scientifically, studying the development of loopholes can help us better understand human communication and design better human-AI interactions. However, cognitive research on this behavior remains scarce. A survey of parents reveals that loophole behavior is prevalent, frequent, and diverse in daily parent-child interactions, emerging around ages five to six. A further experiment shows that adults consider loophole behavior less costly than non-compliance, and that children increasingly differentiate loophole behavior from non-compliance from ages four to ten. We discuss the implications and limitations of the current work, together with a proposal for a formal framework for loophole behavior.
Sophie Bridgers · Elena Glassman · Laura Schulz · Tomer Ullman
Mon 11:10 a.m. - 11:15 a.m.
Q&A
Mon 11:15 a.m. - 11:25 a.m.
Intuitive Image Descriptions are Context-Sensitive (Contributed talk)
Consumers of image descriptions want them to be context-sensitive, but previous crowdsourced efforts to create text from images have presented the images in isolation. We tested whether untrained crowdworkers naturally take context into account when writing image descriptions by asking them to write descriptions for images that we embedded in the first paragraph of a Wikipedia article. Our analysis shows that the produced descriptions were statistically significantly more likely to reflect the contents of the article they were presented with than those of mismatched articles. These findings have implications for the extent and usefulness of training crowdworkers when developing large-scale context-sensitive description corpora, as well as for the development of deep learning models for automatic description generation.
Shayan Hooshmand · Elisa Kreiss · Christopher Potts
Mon 11:25 a.m. - 11:30 a.m.
Q&A
Mon 11:30 a.m. - 11:50 a.m.
Incorporating Interaction in Models of Language Use (Invited talk)
Everyday conversation comes with an important affordance: interaction. Amongst other forms of metacommunication, interaction allows for the use of other-initiated repair: where a receiver signals trouble in understanding a producer's utterance, thereby prompting the producer to repeat or clarify. This phenomenon is ubiquitous in everyday conversation, but its affordance has largely been ignored in computational models of language use and language evolution. In this talk, I explore what happens when we add other-initiated repair to (i) a model of disambiguation in language use, and (ii) a model of the cultural evolution of compositional structure in language. In the first case study, we show that interactive repair may help outsource some of the computational resource demands of pragmatic reasoning to interaction (where disambiguation takes place across multiple turns). In the second case study, we show that interactive repair may play a role in 'locking in' compositional structure over generations in the cultural evolution of language.
Marieke Woensdregt
Mon 11:50 a.m. - 12:00 p.m.
Q&A
Mon 12:00 p.m. - 12:30 p.m.
Break / Meet-and-greet #2
Mon 12:30 p.m. - 12:50 p.m.
Living in the moment: Studying pragmatic inference with temporally sensitive measures of comprehension (Invited talk)
We will take a whirlwind, mile-high tour of the literature on the moment-to-moment processing of two simple quantity implicatures: scalar implicatures (avoidance of underinformative statements) and the inference that adjectives will be used contrastively (avoidance of overinformativity). On the basis of the scalars, I will propose that there are two routes by which implicatures are calculated: a slow bottom-up route and a top-down route that leads to the appearance of instantaneous implicature. This top-down route relies on the speaker's conceptualization of the context in linguistically relevant terms. This analysis makes some novel predictions about the role of speaker modelling in the adjective inference. I'll present unpublished data that support these new predictions.
Jesse Snedeker
Mon 12:50 p.m. - 1:00 p.m.
Q&A
Mon 1:00 p.m. - 1:10 p.m.
Efficient Pragmatic Program Synthesis with Informative Specifications (Contributed talk)
Providing examples is one of the most common ways for end-users to interact with program synthesizers. However, program synthesis systems assume that examples consistent with the program are chosen at random, and do not exploit the fact that users choose examples pragmatically. Prior work modeled program synthesis as pragmatic communication, but required an inefficient enumeration of the entire program space. In this paper, we show that it is possible to build a program synthesizer that is both pragmatic and efficient by approximating the joint distribution of programs with a product of independent factors, and performing pragmatic inference on each factor separately. This naive factored distribution approximates the exact joint distribution well when the examples are given pragmatically, and it is compatible with a very simple neuro-symbolic synthesis algorithm.
Saujas Vaduguru · Yewen Pu · Kevin Ellis
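The key idea – pragmatic (RSA-style) inference run on one program factor at a time instead of on the exponentially large joint program space – can be sketched in a toy domain. The interval programs, the single-example specifications, and the variable names below are all hypothetical illustrations, not the authors' system:

```python
import numpy as np

def normalize(m, axis):
    s = m.sum(axis=axis, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

def pragmatic_factor(values, examples, truth):
    """One RSA step over a single program factor.
    Returns l1 with l1[v, x] = P(factor value v | user gave example x)."""
    meaning = np.array([[1.0 if truth(v, x) else 0.0 for x in examples]
                        for v in values])
    l0 = normalize(meaning, axis=0)  # literal synthesizer: P(v | x)
    s1 = normalize(l0, axis=1)       # pragmatic user:      P(x | v)
    l1 = normalize(s1, axis=0)       # pragmatic synthesizer: P(v | x)
    return l1

# Toy program space: intervals lo <= x <= hi. Under the factored
# approximation, the "lo" factor is inferred separately from "hi",
# so each RSA step enumerates only one factor's values.
los = [0, 1, 2]
xs = [0, 1, 2, 3, 4, 5]
l1_lo = pragmatic_factor(los, xs, lambda lo, x: lo <= x)
# A pragmatic user demonstrating lo = 1 tends to pick the boundary example
# x = 1, so seeing x = 1 concentrates the pragmatic synthesizer's belief on
# lo = 1 beyond the literal listener's 0.5.
```

The joint posterior over full programs is then approximated as the product of the per-factor posteriors, which is what makes the inference tractable.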
Mon 1:10 p.m. - 1:15 p.m.
Q&A
Mon 1:15 p.m. - 1:25 p.m.
Multi-party referential communication in complex strategic games (Contributed talk)
Verbal communication is a ubiquitous aspect of human interaction occurring in many contexts; however, it is primarily studied in the limited context of two people communicating information. Understanding communication in complex, multi-party interactions is both a scientific challenge for psycholinguistics and an engineering challenge for creating artificial agents who can participate in these richer contexts. We adapted the reference game paradigm to an online three-player game where players refer to objects in order to coordinate selections based on the available utilities. We ran games with shared or individual payoffs and with or without access to language. Our paradigm can also be used for artificial agents; we trained reinforcement-learning-based agents on the same task as a comparison. Our dataset shows the same patterns found in simpler reference games and contains rich language of reference and negotiation.
Jessica Mankewitz · Veronica Boyce · Brandon Waldon · Georgia Loukatou · Dhara Yu · Jesse Mu · Noah Goodman · Michael C Frank
Mon 1:25 p.m. - 1:30 p.m.
Q&A
Mon 1:30 p.m. - 1:50 p.m.
Language, Context, and Action: A Semantic Machines View of Conversational AI (Invited talk)
Task-oriented dialog is inherently about contextual action: users address the system from a specific context and the system must decide what to do in response. This talk will present some of the core principles of the Semantic Machines team's approach to conversational AI: program synthesis for action prediction, compositionality for handling complex tasks, metacomputation for reference and revision, error handling for dialog management, and dynamic generation for truthful output. I will also mention ways in which real-world constraints can help to inform the design of conversational systems.
Dan Klein
Mon 1:50 p.m. - 2:00 p.m.
Q&A
Mon 2:00 p.m. - 2:05 p.m.
Closing remarks
Author Information
Jennifer Hu (Massachusetts Institute of Technology)
Noga Zaslavsky (MIT)
Aida Nematzadeh (DeepMind)
Michael Franke (Universität Osnabrück)
Roger Levy (Massachusetts Institute of Technology)
Noah Goodman (Stanford University)
Noah Goodman · Josh Tenenbaum · Michael Tessler · Jason Madeano -
2021 : Multi-party referential communication in complex strategic games »
Jessica Mankewitz · Veronica Boyce · Brandon Waldon · Georgia Loukatou · Dhara Yu · Jesse Mu · Noah Goodman · Michael C Frank -
2021 : Opening remarks »
Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman -
2021 Poster: Emergent Communication of Generalizations »
Jesse Mu · Noah Goodman -
2021 Poster: Contrastive Reinforcement Learning of Symbolic Reasoning Domains »
Gabriel Poesia · WenXin Dong · Noah Goodman -
2021 Poster: Grammar-Based Grounded Lexicon Learning »
Jiayuan Mao · Freda Shi · Jiajun Wu · Roger Levy · Josh Tenenbaum -
2021 Poster: Improving Compositionality of Neural Networks by Decoding Representations to Inputs »
Mike Wu · Noah Goodman · Stefano Ermon -
2021 Panel: The Consequences of Massive Scaling in Machine Learning »
Noah Goodman · Melanie Mitchell · Joelle Pineau · Oriol Vinyals · Jared Kaplan -
2020 Poster: Language Through a Prism: A Spectral Approach for Multiscale Language Representations »
Alex Tamkin · Dan Jurafsky · Noah Goodman -
2019 : Panel Discussion »
Jacob Andreas · Edward Gibson · Stefan Lee · Noga Zaslavsky · Jason Eisner · Jürgen Schmidhuber -
2019 : Invited Talk - 2 »
Noga Zaslavsky -
2019 Poster: Variational Bayesian Optimal Experimental Design »
Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman -
2019 Spotlight: Variational Bayesian Optimal Experimental Design »
Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman -
2018 Poster: Bias and Generalization in Deep Generative Models: An Empirical Study »
Shengjia Zhao · Hongyu Ren · Arianna Yuan · Jiaming Song · Noah Goodman · Stefano Ermon -
2018 Spotlight: Bias and Generalization in Deep Generative Models: An Empirical Study »
Shengjia Zhao · Hongyu Ren · Arianna Yuan · Jiaming Song · Noah Goodman · Stefano Ermon -
2018 Poster: Multimodal Generative Models for Scalable Weakly-Supervised Learning »
Mike Wu · Noah Goodman -
2017 : Efficient human-like semantic representations via the information bottleneck principle »
Noga Zaslavsky -
2017 : Evaluating the capacity to reason about beliefs »
Aida Nematzadeh -
2017 : Morning panel discussion »
Jürgen Schmidhuber · Noah Goodman · Anca Dragan · Pushmeet Kohli · Dhruv Batra -
2017 : "Language in context" »
Noah Goodman -
2017 Poster: Learning Disentangled Representations with Semi-Supervised Deep Generative Models »
Siddharth Narayanaswamy · Brooks Paige · Jan-Willem van de Meent · Alban Desmaison · Noah Goodman · Pushmeet Kohli · Frank Wood · Philip Torr -
2016 Poster: Neurally-Guided Procedural Models: Amortized Inference for Procedural Graphics Programs using Neural Networks »
Daniel Ritchie · Anna Thomas · Pat Hanrahan · Noah Goodman -
2015 Workshop: Bounded Optimality and Rational Metareasoning »
Samuel J Gershman · Falk Lieder · Tom Griffiths · Noah Goodman -
2013 Poster: Learning and using language via recursive pragmatic reasoning about other agents »
Nathaniel J Smith · Noah Goodman · Michael C Frank -
2013 Poster: Learning Stochastic Inverses »
Andreas Stuhlmüller · Jacob Taylor · Noah Goodman -
2012 Workshop: Probabilistic Programming: Foundations and Applications (2 day) »
Vikash Mansinghka · Daniel Roy · Noah Goodman -
2012 Workshop: Probabilistic Programming: Foundations and Applications (2 day) »
Vikash Mansinghka · Daniel Roy · Noah Goodman -
2012 Poster: Burn-in, bias, and the rationality of anchoring »
Falk Lieder · Tom Griffiths · Noah Goodman -
2011 Poster: Nonstandard Interpretations of Probabilistic Programs for Efficient Inference »
David Wingate · Noah Goodman · Andreas Stuhlmueller · Jeffrey Siskind