
Learning to Specialize with Knowledge Distillation for Visual Question Answering
Jonghwan Mun · Kimin Lee · Jinwoo Shin · Bohyung Han

Tue Dec 04 02:00 PM -- 04:00 PM (PST) @ Room 210 #75

Visual Question Answering (VQA) is a notoriously challenging problem because it involves various heterogeneous tasks, defined by questions, within a unified framework. Learning specialized models for individual types of tasks is intuitively attractive but surprisingly difficult; it is not straightforward to outperform a naive independent ensemble approach. We present a principled algorithm to learn specialized models with knowledge distillation under a multiple choice learning (MCL) framework, where training examples are assigned dynamically to a subset of models for updating network parameters. The assigned and non-assigned models are learned to predict ground-truth answers and imitate their own base models before specialization, respectively. Our approach alleviates the limitation of data deficiency in existing MCL frameworks, and allows each model to learn its own specialized expertise without forgetting general knowledge. The proposed framework is model-agnostic and applicable to tasks other than VQA, e.g., image classification with a large number of labels but few per-class examples, which is known to be difficult under existing MCL schemes. Our experimental results demonstrate that our method outperforms other baselines for VQA and image classification.
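The abstract's core idea, MCL-style dynamic assignment combined with distillation for non-assigned models, can be illustrated with a minimal per-example loss sketch. This is not the authors' implementation; the top-k assignment rule, the `beta` weight on the distillation term, and the use of plain probability vectors are assumptions for illustration.

```python
import numpy as np

def cross_entropy(p, y):
    # Negative log-likelihood of the ground-truth answer y under prediction p.
    return -np.log(p[y] + 1e-12)

def kl_div(p_base, p):
    # KL(p_base || p): distillation loss pulling the model toward its frozen base.
    return float(np.sum(p_base * (np.log(p_base + 1e-12) - np.log(p + 1e-12))))

def mcl_kd_loss(preds, base_preds, y, k=1, beta=0.5):
    """Per-example MCL-with-distillation loss over K models (illustrative sketch).

    preds:      list of K probability vectors from the current specialized models
    base_preds: list of K probability vectors from the frozen base models
    y:          ground-truth answer index
    k:          number of models the example is assigned to (assumed top-k rule)
    beta:       assumed weight on the distillation term for non-assigned models
    """
    task_losses = [cross_entropy(p, y) for p in preds]
    # Dynamically assign the example to the k models with the lowest task loss.
    assigned = set(np.argsort(task_losses)[:k].tolist())
    total = 0.0
    for m, (p, p_base) in enumerate(zip(preds, base_preds)):
        if m in assigned:
            total += task_losses[m]            # assigned: learn the ground truth
        else:
            total += beta * kl_div(p_base, p)  # non-assigned: imitate base model
    return total, assigned
```

In this sketch the non-assigned models are never pushed toward an arbitrary target; distilling from their own pre-specialization base models is what lets each expert specialize without forgetting general knowledge.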

Author Information

Jonghwan Mun (POSTECH)
Kimin Lee (Korea Advanced Institute of Science and Technology)
Jinwoo Shin (KAIST; AITRICS)
Bohyung Han (Seoul National University)
