

Poster

Bifröst: 3D-Aware Image Composing with Language Instructions

Lingxiao Li · Kaixiong Gong · Wei-Hong Li · Xili Dai · Tao Chen · Xiaojun Yuan · Xiangyu Yue

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: This paper introduces Bifröst, a novel 3D-aware framework built upon diffusion models to perform instruction-based image composition. Previous methods concentrate on image compositing at the 2D level and fall short in handling complex spatial relationships (e.g., occlusion). Bifröst addresses these issues by training an MLLM as a 2.5D location predictor and integrating depth maps as an extra condition during the generation process to bridge the gap between 2D and 3D, which enhances spatial comprehension and supports sophisticated spatial interactions. Our method begins by fine-tuning an MLLM on a custom counterfactual dataset to predict 2.5D object locations in complex backgrounds from language instructions. The image-composing model is then designed to process multiple types of input features, enabling it to perform high-fidelity image compositions that account for occlusion, depth blur, and image harmonization. Extensive qualitative and quantitative evaluations demonstrate that Bifröst significantly outperforms existing methods, providing a robust solution for generating realistically composed images in scenarios demanding intricate spatial understanding. This work not only pushes the boundaries of generative image compositing but also reduces reliance on expensive annotated datasets by effectively utilizing existing resources in innovative ways.
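
To make the two-stage pipeline concrete, here is a minimal Python sketch of the inference flow the abstract describes. Every name in it (Location25D, predict_25d_location, compose) is a hypothetical placeholder, not the authors' actual API: stage 1 stands in for the fine-tuned MLLM that predicts a 2.5D location, and stage 2 stands in for the depth-conditioned diffusion composer, neither of which is reproduced here.

# Hypothetical sketch of a Bifröst-style two-stage pipeline.
# Stage 1: an MLLM maps a language instruction + background depth map
#          to a 2.5D location (2D bounding box + relative depth).
# Stage 2: a diffusion-based composer places the object using the depth
#          map as an extra condition, so occlusion and depth blur can be
#          handled. Both stages are stubbed; only the data flow is real.

from dataclasses import dataclass

@dataclass
class Location25D:
    """A 2.5D placement: a normalized 2D box plus a relative depth value."""
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float  # relative depth in [0, 1]; smaller = closer to camera

def predict_25d_location(instruction: str,
                         background_depth: list[list[float]]) -> Location25D:
    """Stage 1 (hypothetical): stands in for the fine-tuned MLLM trained on
    a counterfactual dataset. Returns a fixed placement for illustration."""
    return Location25D(x0=0.30, y0=0.40, x1=0.60, y1=0.90, depth=0.55)

def compose(background, obj, loc: Location25D) -> None:
    """Stage 2 (hypothetical): stands in for the depth-conditioned diffusion
    composer that fuses the background, object, and 2.5D location."""
    # The real model consumes multiple input features (image, depth, box);
    # here we just report the planned placement.
    print(f"Placing object in box ({loc.x0:.2f}, {loc.y0:.2f}, "
          f"{loc.x1:.2f}, {loc.y1:.2f}) at relative depth {loc.depth:.2f}")

if __name__ == "__main__":
    depth_map = [[0.5]]  # placeholder depth map of the background scene
    loc = predict_25d_location("put the cat behind the chair", depth_map)
    compose(background=None, obj=None, loc=loc)

The key idea this sketch isolates is the 2.5D intermediate representation: by augmenting a 2D box with a relative depth value and conditioning generation on the depth map, the composer has enough geometry to decide what occludes what, which a purely 2D placement cannot express.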
