

Poster

Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis

Liang Han · Junsheng Zhou · Yu-Shen Liu · Zhizhong Han

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Learning to synthesize novel views from sparse inputs is a vital yet challenging task in 3D computer vision. Previous methods explore 3D Gaussian Splatting with neural priors (e.g., depth priors) as additional supervision, demonstrating promising quality and efficiency compared to NeRF-based methods. However, the neural priors from 2D pretrained models are often noisy and blurry, and struggle to precisely guide the learning of radiance fields. In this paper, we propose a novel prior-free method for sparse-view Gaussian Splatting that exploits the self-supervision inherent in the binocular stereo consistency between each pair of binocular images constructed with disparity-guided image warping. We additionally introduce a Gaussian opacity constraint that regularizes Gaussian locations and avoids Gaussian redundancy, improving the robustness and efficiency of sparse-view Gaussian Splatting. Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that our method significantly outperforms state-of-the-art methods.
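The self-supervision signal described above is the photometric consistency between a view and its binocular counterpart synthesized by disparity-guided warping. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: it assumes horizontally rectified stereo pairs, per-pixel disparity measured in pixels, and the function names `warp_by_disparity` and `binocular_consistency_loss` are ours.

```python
# Minimal sketch of disparity-guided image warping and a stereo
# photometric consistency loss (hypothetical, not the paper's code).
import torch
import torch.nn.functional as F


def warp_by_disparity(src: torch.Tensor, disparity: torch.Tensor) -> torch.Tensor:
    """Warp `src` (B, C, H, W) into the paired view using per-pixel
    horizontal `disparity` (B, 1, H, W) in pixel units.

    Assumes a horizontally rectified stereo pair; the sign convention
    of the disparity depends on which view of the pair is warped.
    """
    b, _, h, w = src.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=src.device),
        torch.linspace(-1.0, 1.0, w, device=src.device),
        indexing="ij",
    )
    # Shift x-coordinates by the disparity, converted to normalized units
    # (a shift of d pixels corresponds to 2*d/(W-1) in [-1, 1] coords).
    xs = xs.unsqueeze(0) + 2.0 * disparity.squeeze(1) / max(w - 1, 1)
    grid = torch.stack((xs, ys.unsqueeze(0).expand(b, -1, -1)), dim=-1)
    return F.grid_sample(src, grid, align_corners=True, padding_mode="border")


def binocular_consistency_loss(left: torch.Tensor,
                               right: torch.Tensor,
                               disparity: torch.Tensor) -> torch.Tensor:
    """L1 photometric difference between the right image and the left
    image warped into the right view (a common stereo-consistency form)."""
    return (warp_by_disparity(left, disparity) - right).abs().mean()
```

In a sparse-view Gaussian Splatting pipeline, such a loss would be applied between rendered views and their warped binocular counterparts, providing supervision without any pretrained 2D prior network.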
