Poster
in
Workshop: Deep Generative Models and Downstream Applications

Few-Shot Out-of-Domain Transfer of Natural Language Explanations

Yordan Yordanov · Vid Kocijan · Thomas Lukasiewicz · Oana M Camburu


Abstract:

Recently, there has been increasing interest in models that generate natural language explanations (NLEs) for their decisions. However, training a model to explain its decisions in natural language requires the acquisition of task-specific NLEs, which is time- and resource-consuming. A potential solution is the out-of-domain transfer of NLEs, where explainability is transferred from a domain with rich data to a domain with scarce data via few-shot transfer learning. In this work, we introduce and compare four approaches for few-shot transfer learning of NLEs. We transfer explainability from the natural language inference domain, where a large dataset of human-written NLEs already exists, to the domains of hard cases of pronoun resolution and commonsense validation. Our results demonstrate that few-shot transfer far outperforms both zero-shot transfer and single-task training with few examples. We also investigate the scalability of the few-shot transfer of explanations, in terms of both training data and model size.