Implicit Neural Representations with Periodic Activation Functions
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions.
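For concreteness, below is a minimal sketch of a SIREN in PyTorch. The sine activation sin(ω₀·(Wx + b)), the frequency factor ω₀ = 30, and the initialization scheme (first layer drawn from U(-1/fan_in, 1/fan_in), deeper layers from U(-√(6/fan_in)/ω₀, √(6/fan_in)/ω₀)) follow the paper; the class names, network width, and depth are illustrative choices, not prescribed by the authors.

import math
import torch
from torch import nn

class SineLayer(nn.Module):
    """One SIREN layer: y = sin(omega_0 * (W x + b)).

    The paper's initialization keeps activation statistics stable
    through depth: the first layer is drawn from U(-1/fan_in, 1/fan_in),
    deeper layers from U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0).
    """
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            bound = (1.0 / in_features) if is_first \
                else math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A SIREN mapping 2D coordinates to RGB, e.g. to fit a single image.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),  # linear output layer
)

coords = (torch.rand(1024, 2) * 2 - 1).requires_grad_(True)  # in [-1, 1]^2
rgb = siren(coords)

# Because the representation is differentiable everywhere, spatial
# derivatives of the fitted signal are available via autograd, which is
# what makes supervision through PDE-based losses possible:
grads = torch.autograd.grad(rgb.sum(), coords, create_graph=True)[0]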
Author Information
Vincent Sitzmann (MIT)
Vincent is an incoming Assistant Professor at MIT EECS, where he will lead the Scene Representation Group (scenerepresentations.org). Currently, he is a postdoc at MIT CSAIL with Josh Tenenbaum, Bill Freeman, and Fredo Durand. He finished his Ph.D. at Stanford University. His research interest lies in neural scene representations: the way neural networks learn to represent information about our world. His goal is to enable independent agents to reason about our world from visual observations, such as inferring a complete model of a scene, including its geometry, materials, and lighting, from only a few observations, a task that is simple for humans but currently impossible for AI.
Julien N. P. Martel (Stanford University)
Alexander Bergman (Stanford University)
David Lindell (Stanford University)
Gordon Wetzstein (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Oral: Implicit Neural Representations with Periodic Activation Functions »
  Wed. Dec 9th, 02:00 -- 02:15 AM · Room: Orals & Spotlights: Deep Learning/Theory
More from the Same Authors
- 2021 Spotlight: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
  Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand
- 2021: 3D Neural Scene Representations for Visuomotor Control »
  Yunzhu Li · Shuang Li · Vincent Sitzmann · Pulkit Agrawal · Antonio Torralba
- 2021: Gordon Wetzstein Talk »
  Gordon Wetzstein
- 2021 Poster: Learning Signal-Agnostic Manifolds of Neural Fields »
  Yilun Du · Katie Collins · Josh Tenenbaum · Vincent Sitzmann
- 2021 Poster: Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering »
  Vincent Sitzmann · Semon Rezchikov · Bill Freeman · Josh Tenenbaum · Fredo Durand
- 2021 Poster: Fast Training of Neural Lumigraph Representations using Meta Learning »
  Alexander Bergman · Petr Kellnhofer · Gordon Wetzstein
- 2020 Poster: MetaSDF: Meta-Learning Signed Distance Functions »
  Vincent Sitzmann · Eric Chan · Richard Tucker · Noah Snavely · Gordon Wetzstein
- 2019 Poster: Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations »
  Vincent Sitzmann · Michael Zollhoefer · Gordon Wetzstein
- 2019 Oral: Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations »
  Vincent Sitzmann · Michael Zollhoefer · Gordon Wetzstein