Poster
HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes
Zan Wang · Yixin Chen · Tengyu Liu · Yixin Zhu · Wei Liang · Siyuan Huang


Learning to generate diverse, scene-aware, and goal-oriented human motions in 3D scenes remains challenging due to the shortcomings of existing Human-Scene Interaction (HSI) datasets: they are limited in scale and quality and lack semantic annotations. To fill this gap, we propose HUMANISE, a large-scale and semantically rich synthetic HSI dataset, built by aligning captured human motion sequences with various 3D indoor scenes. We automatically annotate the aligned motions with language descriptions that depict the action and the individual interacting objects; e.g., sit on the armchair near the desk. HUMANISE thus enables a new generation task: language-conditioned human motion generation in 3D scenes. The proposed task is challenging as it requires joint modeling of the 3D scene, human motion, and natural language. To tackle this task, we present a novel scene-and-language conditioned generative model that produces 3D human motions performing the desired action while interacting with the specified objects. Our experiments demonstrate that our model generates diverse and semantically consistent human motions in 3D scenes.

Author Information

Zan Wang (Beijing Institute of Technology)
Yixin Chen (UCLA)
Tengyu Liu (Beijing Institute of General Artificial Intelligence)

I am currently a researcher at the General Vision Lab of the Beijing Institute of General Artificial Intelligence (BIGAI). I obtained my PhD degree in computer science from UCLA in 2021 under the supervision of Prof. Song-Chun Zhu. Before that, I received my master's degree in computer science from UCLA and my bachelor's degree in computer science from UIUC. My research interest lies at the intersection of 3D computer vision, computer graphics, and robotics. My long-term goal is to create intelligent agents that can interact with virtual or physical environments just as humans do. My recent works include dexterous grasping with arbitrary hands and the reconstruction/generation of dynamic scenes with humans.

Yixin Zhu (Peking University)
Wei Liang (Beijing Institute of Technology)
Siyuan Huang (University of California, Los Angeles)