Poster

Multiview Aggregation for Learning Category-Specific Shape Reconstruction

Srinath Sridhar · Davis Rempe · Julien Valentin · Sofien Bouaziz · Leonidas Guibas

East Exhibition Hall B + C #96

Keywords: Supervised Deep; Applications -> Visual Scene Analysis and Interpretation; Deep Learning -> Deep Autoencoders; Deep Learning; Computer Vision; Applications


Abstract:

We investigate the problem of learning category-specific 3D shape reconstruction from a variable number of RGB views of previously unobserved object instances. Most approaches to multiview shape reconstruction operate on sparse shape representations or assume a fixed number of views. We present a method that estimates dense 3D shape and aggregates shape across a variable number of input views. Given a single input view of an object instance, we propose a representation that encodes the dense shape of the visible object surface as well as the surface behind it that is occluded along the line of sight. When multiple input views are available, the shape representations are designed to be aggregated into a single 3D shape using an inexpensive union operation. We train a 2D CNN to predict this representation from a variable number of views (one or more). We further aggregate multiview information using permutation equivariant layers that promote order-agnostic view information exchange at the feature level. Experiments show that our approach produces dense 3D reconstructions of objects whose quality improves as more views are added.
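To illustrate the order-agnostic feature exchange mentioned above, the sketch below shows a Deep Sets-style permutation equivariant layer that mixes each view's features with a pooled cross-view summary. This is a minimal sketch of the general technique under assumed design choices, not the paper's exact architecture; the names `PermEquivariantLayer`, `per_view`, and `pooled` are hypothetical.

```python
import torch
import torch.nn as nn


class PermEquivariantLayer(nn.Module):
    """Deep Sets-style permutation equivariant layer (illustrative).

    Combines a per-view (element-wise) transform with a transform of the
    mean over views, so permuting the input views permutes the outputs
    identically: the layer is equivariant to view order.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.per_view = nn.Linear(in_dim, out_dim)  # applied to each view independently
        self.pooled = nn.Linear(in_dim, out_dim)    # applied to the order-agnostic mean

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_views, in_dim); num_views may vary between batches
        summary = x.mean(dim=1, keepdim=True)       # (batch, 1, in_dim), order-agnostic
        return torch.relu(self.per_view(x) + self.pooled(summary))


# Usage: features from a variable number of views exchange information.
views = torch.randn(2, 5, 128)          # 5 views here; any count works
layer = PermEquivariantLayer(128, 128)
out = layer(views)                      # (2, 5, 128); permuting views permutes out
```

Under a scheme like this, the inexpensive union of the final per-view shape predictions could be as simple as concatenating the per-view point sets once they are expressed in a shared canonical coordinate space, though the paper's precise aggregation operator may differ.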
