

Poster

Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance

Jiwan Hur · DongJae Lee · Gyojin Han · Jaehyun Choi · Yunho Jeon · Junmo Kim

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Masked generative models (MGMs) have shown impressive generative ability while requiring an order of magnitude fewer sampling steps than continuous diffusion models. However, MGMs still underperform similarly sized, well-developed continuous diffusion models in image synthesis, in terms of both the quality and the diversity of generated samples. A key factor in the performance of continuous diffusion models is their guidance methods, which enhance sample quality at the expense of diversity. In this paper, we extend these guidance methods to a generalized guidance formulation for MGMs and propose a self-guidance sampling method that yields better generation quality and diversity. The proposed approach leverages an auxiliary task for semantic smoothing in the vector-quantized token space, analogous to Gaussian blur in the continuous pixel space. Equipped with a parameter-efficient fine-tuning method, MGMs with our self-guidance technique achieve a superior quality-diversity trade-off, outperforming existing MGM sampling methods at lower training and sampling costs. We further experiment extensively with various sampling hyperparameters, confirming the effectiveness of the proposed guidance.
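As a rough illustration of the guidance idea sketched in the abstract, the snippet below assumes a classifier-free-guidance-style combination of logits from the base MGM and from an auxiliary "semantically smoothed" prediction; this is a minimal sketch based on our reading, not the paper's actual formulation or API, and names such as `aux_model` and `guided_mgm_logits` are hypothetical.

```python
import torch


def guided_mgm_logits(logits_main, logits_smooth, guidance_scale):
    """Combine main and smoothed predictions, analogous to classifier-free guidance.

    The auxiliary, semantically smoothed prediction plays the role of the
    "unconditional" branch; the main prediction is pushed away from it.
    Shapes: (batch, num_tokens, vocab_size).
    """
    return logits_smooth + guidance_scale * (logits_main - logits_smooth)


def sample_step(model, aux_model, tokens, mask, guidance_scale=2.0, temperature=1.0):
    """One illustrative MGM sampling step with self-guidance (hypothetical interfaces).

    `model` predicts token logits at masked positions; `aux_model` is the
    fine-tuned auxiliary head producing semantically smoothed logits.
    `mask` is a boolean tensor marking positions still to be filled.
    """
    logits_main = model(tokens, mask)        # (B, N, V)
    logits_smooth = aux_model(tokens, mask)  # (B, N, V)
    logits = guided_mgm_logits(logits_main, logits_smooth, guidance_scale)
    probs = torch.softmax(logits / temperature, dim=-1)
    sampled = torch.distributions.Categorical(probs=probs).sample()
    # Fill only the masked positions; keep already-committed tokens fixed.
    return torch.where(mask, sampled, tokens)
```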
