

Poster

InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint

Zhenzhi Wang · Jingbo Wang · Yixuan Li · Dahua Lin · Bo Dai

East Exhibit Hall A-C #2400
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Text-conditioned human motion synthesis has made remarkable progress with the emergence of diffusion models in recent research. However, the majority of these motion diffusion models are designed for a single character and overlook multi-human interactions. We address this problem by synthesizing human motion with interactions for a group of characters of any size. The key aspect of our approach is the representation of human-wise interactions as pairs of human joints that are either in contact or separated by a desired distance. In contrast to existing methods that require training motion generation models on multi-human motion datasets with a fixed number of characters, our approach can inherently model human interactions involving an arbitrary number of individuals, thereby transcending the limitations imposed by the training data. We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs. It consists of a motion controller and an inverse kinematics guidance module that realistically and accurately aligns the joints of synthesized characters to the desired locations. Furthermore, we demonstrate that the distances between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model (LLM). Experimental results highlight the capability of our framework to generate interactions with multiple human characters and its potential to work with off-the-shelf physics-based character simulators.
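To illustrate the core idea of guiding a motion diffusion model with joint-pair distance constraints, here is a minimal sketch, not the authors' implementation: it assumes a hypothetical `denoise` model that predicts clean motion from a noisy sample and a hypothetical differentiable `fk` function mapping motion features to 3D joint positions, and nudges the prediction with one gradient step on a pairwise-distance loss.

import torch

def joint_pair_distance_loss(joints, pairs, target_dist):
    """joints: (T, J, 3) joint positions over T frames (characters concatenated along J).
    pairs: (P, 2) long tensor of joint-index pairs that should interact.
    target_dist: (P,) desired distance per pair (0 for contact)."""
    a = joints[:, pairs[:, 0]]            # (T, P, 3)
    b = joints[:, pairs[:, 1]]            # (T, P, 3)
    dist = (a - b).norm(dim=-1)           # (T, P)
    return ((dist - target_dist) ** 2).mean()

def guided_denoise(x_t, t, denoise, fk, pairs, target_dist, step_size=0.1):
    """One loss-guided denoising step: predict clean motion x0 from noisy x_t,
    then push the prediction toward satisfying the joint-pair constraints.
    `denoise` and `fk` are placeholders for a motion diffusion model and a
    differentiable forward-kinematics layer, respectively."""
    x0_pred = denoise(x_t, t).detach().requires_grad_(True)
    loss = joint_pair_distance_loss(fk(x0_pred), pairs, target_dist)
    grad = torch.autograd.grad(loss, x0_pred)[0]
    return (x0_pred - step_size * grad).detach()

The joint pairs and target distances in this sketch would be supplied by an interaction plan, for example one produced by an off-the-shelf LLM as described in the abstract.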
