

Poster

GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration

Silong Yong · Yaqi Xie · Simon Stepputtis · Katia Sycara

East Exhibit Hall A-C #1301
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Volume rendering in neural radiance fields is inherently time-consuming due to the large number of MLP calls on the points sampled per ray. Previous works address this issue by introducing new neural networks or data structures. In this work, we propose GL-NeRF, a new perspective on computing volume rendering using the Gauss-Laguerre quadrature. GL-NeRF significantly reduces the number of MLP calls needed for volume rendering while introducing no additional data structures or neural networks. Its simple formulation makes it possible to adopt GL-NeRF in any NeRF model. In the paper, we first justify the use of the Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by implementing it in two different NeRF models. We show that, with a minimal drop in performance, GL-NeRF significantly reduces the number of MLP calls, demonstrating its potential to speed up any NeRF model. Code can be found on the project page: https://silongyong.github.io/GL-NeRFprojectpage/.
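
To illustrate the core idea, below is a minimal, self-contained sketch (not the authors' implementation) of how Gauss-Laguerre quadrature can approximate the volume rendering integral with only a few color evaluations per ray. The toy density and color functions, sample counts, and variable names are illustrative assumptions; in an actual NeRF, evaluating color(t) is what requires an MLP call.

import numpy as np

# Volume rendering: C = \int_0^\infty exp(-tau(t)) * sigma(t) * c(t) dt,
# where tau(t) = \int_0^t sigma(s) ds is the optical depth.
# Substituting u = tau(t) gives C = \int_0^\infty exp(-u) * c(t(u)) du,
# which is exactly the form handled by Gauss-Laguerre quadrature:
# \int_0^\infty exp(-x) f(x) dx ~= sum_i w_i f(x_i).

def sigma(t):                        # toy density along one ray (assumption)
    return 2.0 * np.ones_like(t)

def color(t):                        # toy RGB color along one ray (assumption)
    return np.stack([np.exp(-t), np.sin(t) ** 2, 1.0 / (1.0 + t)], axis=-1)

# Dense reference: conventional quadrature with thousands of samples
# (i.e. thousands of color evaluations per ray).
t = np.linspace(0.0, 20.0, 4096)
dt = np.diff(t)
sig = sigma(t)
tau = np.concatenate([[0.0], np.cumsum(0.5 * (sig[1:] + sig[:-1]) * dt)])  # optical depth
integrand = (np.exp(-tau) * sig)[:, None] * color(t)
c_ref = (0.5 * (integrand[1:] + integrand[:-1]) * dt[:, None]).sum(axis=0)

# Gauss-Laguerre: only n nodes, i.e. only n color evaluations per ray.
n = 4
x, w = np.polynomial.laguerre.laggauss(n)   # nodes x_i and weights w_i
t_nodes = np.interp(x, tau, t)              # invert the optical depth: tau(t_i) = x_i
c_gl = (w[:, None] * color(t_nodes)).sum(axis=0)

print("dense reference:", c_ref)
print("Gauss-Laguerre :", c_gl)

In this sketch, the density along the ray is still needed to locate the quadrature points t_i, but the expensive appearance evaluations drop to n per ray, and no retraining or extra data structure is involved, consistent with the training-free, plug-and-play claim in the abstract.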
