Generating controllable and photorealistic digital human avatars is a long-standing and important problem in Vision and Graphics. Recent methods have made great progress in either photorealism or inference speed, but combining the two desired properties remains unsolved. To this end, we propose a novel method, called DELIFFAS, which parameterizes the appearance of the human as a surface light field attached to a controllable and deforming human mesh model. At its core, we represent the light field around the human with a deformable two-surface parameterization, which enables fast and accurate inference of the human appearance. This allows perceptual supervision on the full image, in contrast to previous approaches that could supervise only individual pixels or small patches due to their slow runtime. Our carefully designed human representation and supervision strategy lead to state-of-the-art synthesis results and inference time. Video results and code are available at https://vcai.mpi-inf.mpg.de/projects/DELIFFAS.
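The core idea, querying appearance as a light field pinned to two offset surfaces around the deforming body, can be made concrete with a minimal sketch. In the snippet below, the ray's intersection points with the inner and outer surfaces are assumed to be already computed; the tiny MLP, its layer sizes, and the function `light_field_color` are illustrative placeholders, not the paper's actual architecture or code.

```python
# Minimal sketch of a two-surface light-field lookup. Assumptions: the
# intersections of a camera ray with the inner/outer surfaces are given;
# the MLP shape and weights below are toy placeholders, not from DELIFFAS.
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: maps the two 3D surface points (3 + 3 dims) to an RGB color.
W1 = rng.standard_normal((6, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 3)) * 0.1
b2 = np.zeros(3)

def light_field_color(x_inner, x_outer):
    """A ray is identified by where it pierces the two surfaces that
    sandwich the deforming human mesh; color is one direct MLP lookup,
    with no volumetric sampling along the ray."""
    h = np.concatenate([x_inner, x_outer])
    h = np.maximum(W1.T @ h + b1, 0.0)               # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h + b2)))    # sigmoid -> RGB in [0, 1]

# One lookup per pixel ray (here a single example point pair).
rgb = light_field_color(np.array([0.1, 1.2, 0.3]),
                        np.array([0.15, 1.25, 0.28]))
print(rgb)
```

Because each pixel needs only one such lookup rather than many samples along the ray, a full image amounts to a single batched forward pass, which is what makes the full-image perceptual supervision described above affordable.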
Author Information
Youngjoong Kwon (Robotics Institute, Carnegie Mellon University)
Lingjie Liu (University of Pennsylvania)
Henry Fuchs (Department of Computer Science, University of North Carolina, Chapel Hill)
Marc Habermann (Max Planck Institute for Informatics, Saarland Informatics Campus)
I am the research group leader of the Graphics and Vision for Digital Humans group at the Max Planck Institute for Informatics. My research interests lie in the fields of Computer Vision, Computer Graphics, and Machine Learning. In particular, my work focuses on real-time human performance capture from single RGB videos, physical plausibility of surface deformations and human motion, photo-realistic animation synthesis, and learning generative 3D human characters from video. In summary, my research interests include (but are not limited to): Computer Vision; Computer Graphics; Machine Learning; Human Performance Capture and Synthesis; Reconstruction of Non-Rigid Deformations from RGB Video; Neural Rendering; Motion Capture.
Christian Theobalt (MPI Informatik)
More from the Same Authors
- 2021 Spotlight: Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering
  Youngjoong Kwon · Dahun Kim · Duygu Ceylan · Henry Fuchs
- 2021 Spotlight: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
  Peng Wang · Lingjie Liu · Yuan Liu · Christian Theobalt · Taku Komura · Wenping Wang
- 2023 Poster: Weakly Supervised 3D Open-vocabulary Segmentation
  Kunhao Liu · Fangneng Zhan · Jiahui Zhang · MUYU XU · Yingchen Yu · Abdulmotaleb El Saddik · Christian Theobalt · Eric Xing · Shijian Lu
- 2021 Poster: A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
  Xingang Pan · Xudong XU · Chen Change Loy · Christian Theobalt · Bo Dai
- 2021 Poster: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
  Peng Wang · Lingjie Liu · Yuan Liu · Christian Theobalt · Taku Komura · Wenping Wang
- 2021 Poster: Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering
  Youngjoong Kwon · Dahun Kim · Duygu Ceylan · Henry Fuchs
- 2020 Poster: LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration
  Bharat Lal Bhatnagar · Cristian Sminchisescu · Christian Theobalt · Gerard Pons-Moll
- 2020 Oral: LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration
  Bharat Lal Bhatnagar · Cristian Sminchisescu · Christian Theobalt · Gerard Pons-Moll
- 2020 Poster: Neural Sparse Voxel Fields
  Lingjie Liu · Jiatao Gu · Kyaw Zaw Lin · Tat-Seng Chua · Christian Theobalt
- 2020 Spotlight: Neural Sparse Voxel Fields
  Lingjie Liu · Jiatao Gu · Kyaw Zaw Lin · Tat-Seng Chua · Christian Theobalt