Poster
DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models
Tao Yang · Yuwang Wang · Yan Lu · Nanning Zheng

Tue Dec 12 03:15 PM -- 05:15 PM (PST) @ Great Hall & Hall B1+B2 #918
Event URL: https://github.com/ThomasMrY/DisDiff

Aiming to understand the underlying explainable factors behind observations and to model the conditional generation process on these factors, we connect disentangled representation learning to diffusion probabilistic models (DPMs) to take advantage of the remarkable modeling ability of DPMs. We propose a new task, the disentanglement of DPMs: given a pre-trained DPM, without any annotations of the factors, automatically discover the inherent factors behind the observations and disentangle the gradient fields of the DPM into sub-gradient fields, each conditioned on the representation of one discovered factor. With a disentangled DPM, those inherent factors can be automatically discovered, explicitly represented, and clearly injected into the diffusion process via the sub-gradient fields. To tackle this task, we devise an unsupervised approach named DisDiff, achieving disentangled representation learning within the framework of DPMs for the first time. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of DisDiff.
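The decomposition described in the abstract can be pictured with a minimal sketch. This is a hypothetical illustration, not the authors' implementation (the release is at the Event URL above): the names FactorEncoder, SubGradientField, and disentangled_epsilon are invented for clarity, and small MLPs stand in for the real networks. It shows only the structural idea: a frozen pre-trained DPM supplies an unconditional noise prediction, and K learned sub-gradient fields, each conditioned on one discovered factor representation, are added to it to form the conditional prediction.

```python
# Hypothetical sketch of DisDiff-style decomposition; not the authors' code.
import torch
import torch.nn as nn

class FactorEncoder(nn.Module):
    """Maps an observation x0 to K factor representations z_1..z_K."""
    def __init__(self, in_dim: int, n_factors: int, z_dim: int):
        super().__init__()
        self.n_factors, self.z_dim = n_factors, z_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, n_factors * z_dim),
        )

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        # (batch, K, z_dim): one representation per discovered factor.
        return self.net(x0).view(-1, self.n_factors, self.z_dim)

class SubGradientField(nn.Module):
    """Predicts one factor's contribution to the conditional gradient field."""
    def __init__(self, x_dim: int, z_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, x_dim),
        )

    def forward(self, xt, t, z_k):
        # Condition on the noisy sample xt, timestep t, and factor code z_k.
        h = torch.cat([xt, z_k, t.float().unsqueeze(-1)], dim=-1)
        return self.net(h)

def disentangled_epsilon(eps_uncond, sub_fields, xt, t, z):
    """Conditional noise prediction: the frozen pre-trained DPM's
    unconditional output plus the sum of per-factor sub-gradient fields."""
    eps = eps_uncond
    for k, field in enumerate(sub_fields):
        eps = eps + field(xt, t, z[:, k])
    return eps

if __name__ == "__main__":
    B, X, K, Z = 4, 32, 3, 8
    enc = FactorEncoder(X, K, Z)
    fields = nn.ModuleList(SubGradientField(X, Z) for _ in range(K))
    x0, xt = torch.randn(B, X), torch.randn(B, X)
    t = torch.randint(0, 1000, (B,))
    z = enc(x0)
    eps_uncond = torch.zeros(B, X)  # stand-in for a frozen pre-trained DPM
    print(disentangled_epsilon(eps_uncond, fields, xt, t, z).shape)  # (4, 32)
```

Under this reading of the task statement ("given a pre-trained DPM"), only the encoder and the sub-gradient fields would be trained while the pre-trained DPM stays frozen, so each discovered factor is injected into the diffusion process solely through its own sub-gradient field.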

Author Information

Tao Yang (Xi'an Jiaotong University)
Yuwang Wang (Tsinghua University)
Yan Lu (Microsoft Research Asia)
Nanning Zheng (Xi'an Jiaotong University)
