Paraphrasing Is All You Need for Novel Object Captioning
Cheng-Fu Yang · Yao-Hung Hubert Tsai · Wan-Cyuan Fan · Russ Salakhutdinov · Louis-Philippe Morency · Frank Wang

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #635

Novel object captioning (NOC) aims to describe images containing objects without observing their ground truth captions during training. Due to the absence of caption annotations, captioning models cannot be directly optimized via sequence-to-sequence training or CIDEr optimization. To address this, we present Paraphrasing-to-Captioning (P2C), a two-stage learning framework for NOC that heuristically optimizes the output captions via paraphrasing. With P2C, the captioning model first learns paraphrasing from a language model pre-trained on a text-only corpus, expanding its word bank and improving linguistic fluency. To further ensure that the output caption sufficiently describes the visual content of the input image, we introduce fidelity and adequacy objectives under which the captioning model performs self-paraphrasing. Since no ground truth captions are available for novel object images during training, P2C leverages cross-modality (image-text) association modules to ensure that these caption characteristics are properly preserved. In the experiments, we not only show that P2C achieves state-of-the-art performance on the nocaps and COCO Caption datasets, but also verify the effectiveness and flexibility of our learning framework by replacing the language and cross-modality association models used for NOC. Implementation details and code are available in the supplementary materials.
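The self-paraphrasing idea above can be illustrated with a minimal, hypothetical sketch. The scoring functions, weights, and candidate list below are toy stand-ins (not the paper's actual modules): `fidelity_score` and `adequacy_score` are simple word-overlap proxies for what a cross-modality association module would estimate, used here only to show how paraphrase candidates could be re-ranked so that captions covering the detected novel objects are preferred.

```python
# Hypothetical sketch of P2C-style candidate re-ranking.
# The scorers are toy word-overlap proxies, NOT the paper's modules.

def fidelity_score(caption, image_tags):
    # Toy proxy for fidelity: fraction of caption words grounded
    # in the detected object tags.
    words = caption.lower().split()
    return sum(w in image_tags for w in words) / max(len(words), 1)

def adequacy_score(caption, image_tags):
    # Toy proxy for adequacy: fraction of detected objects that
    # the caption actually mentions.
    words = set(caption.lower().split())
    return sum(t in words for t in image_tags) / max(len(image_tags), 1)

def select_paraphrase(candidates, image_tags, w_fid=0.5, w_ade=0.5):
    # Re-rank paraphrase candidates by a weighted combination of
    # the two objectives and keep the best one.
    return max(
        candidates,
        key=lambda c: w_fid * fidelity_score(c, image_tags)
                    + w_ade * adequacy_score(c, image_tags),
    )

tags = {"zebra", "grass"}
candidates = [
    "an animal standing in a field",
    "a zebra standing on green grass",
]
print(select_paraphrase(candidates, tags))
# -> "a zebra standing on green grass"
```

In the actual framework, such scores would come from learned image-text association models rather than word overlap, and the preference would drive training of the captioning model instead of a one-off re-ranking.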

Author Information

Cheng-Fu Yang (UCLA)
Yao-Hung Hubert Tsai (Carnegie Mellon University)
Wan-Cyuan Fan (National Taiwan University)
Russ Salakhutdinov (Carnegie Mellon University)
Louis-Philippe Morency (Carnegie Mellon University)
Frank Wang (NVIDIA)