Despite exciting recent progress on high-quality image generation from structured (scene graphs) or free-form (sentences) descriptions, most existing methods only guarantee image-level semantic consistency, i.e., the generated image matches the overall meaning of the description. They do not investigate synthesizing images in a more controllable way, such as finely manipulating the visual appearance of every object. Therefore, to generate images with preferred objects and rich interactions, we propose a semi-parametric method, PasteGAN, which generates an image from a scene graph and object crops: the spatial arrangements of the objects and their pair-wise relationships are defined by the scene graph, while the object appearances are determined by the given crops. To enhance the interactions among the objects in the output, we design a Crop Refining Network and an Object-Image Fuser that embed the objects together with their relationships into a single map. Multiple losses work collaboratively to guarantee that the generated images closely respect the crops and comply with the scene graphs while maintaining excellent image quality. If crops are not provided, a crop selector picks the most compatible crops from an external object tank by encoding the interactions around each object in the scene graph. Evaluated on the Visual Genome and COCO-Stuff datasets, our proposed method significantly outperforms state-of-the-art methods on Inception Score, Diversity Score, and Fréchet Inception Distance. Extensive experiments also demonstrate our method's ability to generate complex and diverse images with given objects. The code is available at https://github.com/yikang-li/PasteGAN.
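To make the described architecture concrete, below is a minimal PyTorch-style sketch of a PasteGAN-like generator. Every module name (CropRefiner, ObjectImageFuser), shape, and the paste-style fusion strategy here are illustrative assumptions inferred from the abstract, not the authors' implementation; the actual code lives in the linked repository.

```python
# Illustrative sketch of a PasteGAN-style generator (NOT the authors' code).
# Module names, feature dimensions, and the fusion scheme are assumptions
# made for clarity, based only on the abstract above.
import torch
import torch.nn as nn

class CropRefiner(nn.Module):
    """Refines raw object-crop features with the object's relation embedding,
    so a crop's appearance adapts to its context in the scene graph."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, crop_feat, rel_feat):
        # Concatenate crop appearance and relation context, then project back.
        return self.net(torch.cat([crop_feat, rel_feat], dim=-1))

class ObjectImageFuser(nn.Module):
    """Pastes refined per-object features into one spatial layout map,
    using each object's bounding box from the scene-graph layout."""
    def __init__(self, dim=128, size=32):
        super().__init__()
        self.dim, self.size = dim, size

    def forward(self, obj_feats, boxes):
        # obj_feats: (num_objs, dim); boxes: (num_objs, 4) as x0,y0,x1,y1 in [0,1]
        canvas = torch.zeros(self.dim, self.size, self.size)
        for feat, (x0, y0, x1, y1) in zip(obj_feats, boxes):
            xs = int(x0 * self.size)
            ys = int(y0 * self.size)
            xe = max(int(x1 * self.size), xs + 1)  # keep at least 1 pixel
            ye = max(int(y1 * self.size), ys + 1)
            canvas[:, ys:ye, xs:xe] += feat[:, None, None]
        return canvas

# Toy usage: two objects with crop features, relation embeddings, and boxes.
dim = 128
refiner, fuser = CropRefiner(dim), ObjectImageFuser(dim)
crop_feats = torch.randn(2, dim)  # e.g. from a crop encoder
rel_feats = torch.randn(2, dim)   # e.g. from graph convolutions over the scene graph
boxes = torch.tensor([[0.1, 0.1, 0.5, 0.6], [0.4, 0.5, 0.9, 0.9]])
layout = fuser(refiner(crop_feats, rel_feats), boxes)
print(layout.shape)  # (128, 32, 32) map, to be decoded into an image
```

In the full method, this fused map would be decoded into an image and trained with the multiple losses mentioned above; the sketch only shows how object crops and scene-graph relationships could be combined into one map.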
Author Information
Yikang Li (The Chinese University of Hong Kong; SenseTime)
Tao Ma (Northwestern Polytechnical University)
Yeqi Bai (Nanyang Technological University)
Nan Duan (Microsoft Research Asia)
Sining Wei (Microsoft Research)
Xiaogang Wang (The Chinese University of Hong Kong)
More from the Same Authors
- 2021: CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
  Shuai Lu · Daya Guo · Shuo Ren · Junjie Huang · Alexey Svyatkovskiy · Ambrosio Blanco · Colin Clement · Dawn Drain · Daxin Jiang · Duyu Tang · Ge Li · Lidong Zhou · Linjun Shou · Long Zhou · Michele Tufano · Ming Gong · Ming Zhou · Nan Duan · Neel Sundaresan · Shao Kun Deng · Shengyu Fu · Shujie Liu
- 2022 Poster: Less-forgetting Multi-lingual Fine-tuning
  Yuren Mao · Yaobo Liang · Nan Duan · Haobo Wang · Kai Wang · Lu Chen · Yunjun Gao
- 2023 Poster: AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset
  Jiakang Yuan · Bo Zhang · Xiangchao Yan · Botian Shi · Tao Chen · Yikang Li · Yu Qiao
- 2023 Poster: A Unified Conditional Framework for Diffusion-based Image Restoration
  Yi Zhang · Xiaoyu Shi · Dasong Li · Xiaogang Wang · Jian Wang · Hongsheng Li
- 2023 Poster: RangePerception: Taming LiDAR Range View for Efficient and Accurate 3D Object Detection
  Yeqi Bai · Ben Fei · Youquan Liu · Tao Ma · Yuenan Hou · Botian Shi · Yikang Li
- 2023 Poster: AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation
  Tong Wu · Zhihao Fan · Xiao Liu · Yeyun Gong · Yelong Shen · Jian Jiao · Hai-Tao Zheng · Juntao Li · Zhongyu Wei · Jian Guo · Nan Duan · Weizhu Chen
- 2022 Spotlight: Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs
  Jinguo Zhu · Xizhou Zhu · Wenhai Wang · Xiaohua Wang · Hongsheng Li · Xiaogang Wang · Jifeng Dai
- 2022 Poster: NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis
  Jian Liang · Chenfei Wu · Xiaowei Hu · Zhe Gan · Jianfeng Wang · Lijuan Wang · Zicheng Liu · Yuejian Fang · Nan Duan
- 2022 Poster: Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs
  Jinguo Zhu · Xizhou Zhu · Wenhai Wang · Xiaohua Wang · Hongsheng Li · Xiaogang Wang · Jifeng Dai
- 2022 Poster: LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
  Xinyu Pi · Wanjun Zhong · Yan Gao · Nan Duan · Jian-Guang Lou
- 2021 Poster: Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering
  Weijiang Yu · Haoteng Zheng · Mengfei Li · Lei Ji · Lijun Wu · Nong Xiao · Nan Duan
- 2021 Poster: ReSSL: Relational Self-Supervised Learning with Weak Augmentation
  Mingkai Zheng · Shan You · Fei Wang · Chen Qian · Changshui Zhang · Xiaogang Wang · Chang Xu
- 2019 Poster: A Tensorized Transformer for Language Modeling
  Xindian Ma · Peng Zhang · Shuai Zhang · Nan Duan · Yuexian Hou · Ming Zhou · Dawei Song
- 2019 Poster: Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis
  Xihui Liu · Guojun Yin · Jing Shao · Xiaogang Wang · Hongsheng Li
- 2018 Poster: FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification
  Yixiao Ge · Zhuowan Li · Haiyu Zhao · Guojun Yin · Shuai Yi · Xiaogang Wang · Hongsheng Li
- 2018 Poster: Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base
  Daya Guo · Duyu Tang · Nan Duan · Ming Zhou · Jian Yin
- 2017 Poster: Learning Deep Structured Multi-Scale Features using Attention-Gated CRFs for Contour Prediction
  Dan Xu · Wanli Ouyang · Xavier Alameda-Pineda · Elisa Ricci · Xiaogang Wang · Nicu Sebe
- 2016 Poster: CRF-CNN: Modeling Structured Information in Human Pose Estimation
  Xiao Chu · Wanli Ouyang · Hongsheng Li · Xiaogang Wang
- 2014 Poster: Multi-View Perceptron: a Deep Model for Learning Face Identity and View Representations
  Zhenyao Zhu · Ping Luo · Xiaogang Wang · Xiaoou Tang
- 2014 Poster: Deep Learning Face Representation by Joint Identification-Verification
  Yi Sun · Yuheng Chen · Xiaogang Wang · Xiaoou Tang