DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer
Wenzheng Chen · Joey Litalien · Jun Gao · Zian Wang · Clement Fuji Tsang · Sameh Khamis · Or Litany · Sanja Fidler

Thu Dec 09 04:30 PM -- 06:00 PM (PST)

We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for the non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIBR++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking advantage of their respective strengths---speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIBR++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches, and showcase several artistic applications including material editing and relighting.
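To illustrate the idea of representing environment lighting with spherical basis functions, the sketch below evaluates a toy shading model built from spherical Gaussian lobes. This is a minimal, hedged illustration only, not the DIB-R++ implementation: the function names (`sg_eval`, `shade`), the choice of lobe parameterization, and the diffuse/specular approximations are all assumptions made for the example.

```python
import math

def dot(u, v):
    # Dot product of two 3-vectors given as lists/tuples.
    return sum(a * b for a, b in zip(u, v))

def sg_eval(d, mu, lam, a):
    """One spherical Gaussian lobe a * exp(lam * (d . mu - 1)),
    with axis mu (unit vector), sharpness lam, and amplitude a.
    (Hypothetical helper for illustration.)"""
    return a * math.exp(lam * (dot(d, mu) - 1.0))

def shade(normal, view, lobes, kd, ks):
    """Toy shading with an SG environment map (a list of (mu, lam, a) lobes):
    the diffuse term samples the environment along the surface normal, and
    the specular term samples it along the mirror reflection of the view
    direction. A real renderer would integrate each lobe against the BRDF."""
    diffuse = sum(sg_eval(normal, mu, lam, a) for mu, lam, a in lobes)
    ndv = dot(normal, view)
    refl = [2.0 * ndv * n_i - v_i for n_i, v_i in zip(normal, view)]
    specular = sum(sg_eval(refl, mu, lam, a) for mu, lam, a in lobes)
    return kd * diffuse + ks * specular
```

Because the lobes enter the shading value through smooth exponentials, such a model is differentiable in the lighting parameters (`mu`, `lam`, `a`) and the material parameters (`kd`, `ks`), which is what makes compact basis-function lighting convenient inside a learning framework.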

Author Information

Wenzheng Chen (University of Toronto)
Joey Litalien (McGill University)
Jun Gao (University of Toronto; Nvidia)
Zian Wang (Tsinghua University)
Clement Fuji Tsang (Université de Technologie de Troyes, France)
Sameh Khamis (University of Maryland)
Or Litany (NVIDIA)
Sanja Fidler (University of Toronto)
