

Poster

Multimodal Residual Learning for Visual QA

Jin-Hwa Kim · Sang-Woo Lee · Donghyun Kwak · Min-Oh Heo · Jeonghee Kim · Jung-Woo Ha · Byoung-Tak Zhang

Area 5+6+7+8 #143

Keywords: [ (Other) Machine Learning Topics ] [ (Other) Classification ] [ (Application) Object and Pattern Recognition ] [ (Application) Computer Vision ] [ Deep Learning or Neural Networks ]


Abstract:

Deep neural networks continue to advance the state of the art in image recognition with a variety of methods, but applications of these methods to multimodal tasks remain limited. We present Multimodal Residual Networks (MRN), which extend the idea of deep residual learning to multimodal residual learning for visual question answering. Unlike deep residual learning, MRN effectively learns a joint representation from visual and language information. The main idea is to use element-wise multiplication for the joint residual mappings, exploiting the residual learning of attentional models in recent studies. We also explore various alternative models introduced by multimodality. We achieve state-of-the-art results on the Visual QA dataset for both the Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method to visualize the attention effect of the joint representations for each learning block using the back-propagation algorithm, even though the visual features are collapsed without spatial information.
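As a rough illustration of the main idea (a sketch, not the authors' exact architecture), a single MRN-style learning block can be written in PyTorch as below; the layer sizes, tanh nonlinearities, and learned shortcut projection are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MRNBlock(nn.Module):
    """Sketch of one multimodal residual learning block: the joint residual
    mapping F(q, v) fuses the question embedding q and the visual feature v
    by element-wise multiplication, and is added to a projected shortcut of q."""

    def __init__(self, q_dim, v_dim, joint_dim):
        super().__init__()
        # Nonlinear mapping of the question embedding.
        self.q_map = nn.Sequential(nn.Linear(q_dim, joint_dim), nn.Tanh())
        # Deeper nonlinear mapping of the visual feature.
        self.v_map = nn.Sequential(
            nn.Linear(v_dim, joint_dim), nn.Tanh(),
            nn.Linear(joint_dim, joint_dim), nn.Tanh(),
        )
        # Learned shortcut projection so q can be added to the joint representation
        # (an assumption here; identity shortcuts are another option).
        self.shortcut = nn.Linear(q_dim, joint_dim)

    def forward(self, q, v):
        # Joint residual mapping via element-wise multiplication.
        f = self.q_map(q) * self.v_map(v)
        # Residual-style addition of the projected question shortcut.
        return self.shortcut(q) + f
```

A deeper model would stack several such blocks, feeding each block's output in as the next block's question-side input while reusing the same visual feature, mirroring how residual learning stacks identity-plus-residual mappings.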
