
Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
Shihao Zhao · Dongdong Chen · Yen-Chun Chen · Jianmin Bao · Shaozhe Hao · Lu Yuan · Kwan-Yee K. Wong

Tue Dec 12 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #617
Event URL: https://github.com/ShihaoZhaoZSH/Uni-ControlNet

Text-to-image diffusion models have made tremendous progress over the past two years, enabling the generation of highly realistic images from open-domain text descriptions. However, despite their success, text descriptions often struggle to adequately convey detailed controls, even when composed of long and complex texts. Moreover, recent studies have also shown that these models face challenges in understanding such complex texts and generating the corresponding images. Therefore, there is a growing need to enable more control modes beyond text description. In this paper, we introduce Uni-ControlNet, a unified framework that allows for the simultaneous utilization of different local controls (e.g., edge maps, depth maps, segmentation masks) and global controls (e.g., CLIP image embeddings) in a flexible and composable manner within one single model. Unlike existing methods, Uni-ControlNet only requires the fine-tuning of two additional adapters upon frozen pre-trained text-to-image diffusion models, eliminating the huge cost of training from scratch. Moreover, thanks to some dedicated adapter designs, Uni-ControlNet only necessitates a constant number (i.e., 2) of adapters, regardless of the number of local or global controls used. This not only reduces the fine-tuning costs and model size, making it more suitable for real-world deployment, but also facilitates the composability of different conditions. Through both quantitative and qualitative comparisons, Uni-ControlNet demonstrates its superiority over existing methods in terms of controllability, generation quality and composability. Code is available at https://github.com/ShihaoZhaoZSH/Uni-ControlNet.
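The constant-adapter idea from the abstract can be sketched in a few lines: all local condition maps are stacked channel-wise and fused by one shared local adapter, while all global embeddings pass through one shared projection, so neither adapter count grows with the number of conditions. This is a minimal illustrative sketch in numpy, not the authors' actual architecture; all function and variable names (`local_adapter`, `global_adapter`, the weight shapes) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the "constant number of adapters" design:
# N local conditions share ONE local adapter; global embeddings share
# ONE global adapter. Names and shapes are illustrative only.

def local_adapter(cond_maps, weight):
    """Fuse N local condition maps (each H x W) with one shared 1x1 mixing."""
    x = np.stack(cond_maps, axis=0)          # (N, H, W): channel-wise stack
    return np.tensordot(weight, x, axes=1)   # (C_out, H, W): shared features

def global_adapter(clip_embedding, proj, num_tokens):
    """Project one global embedding into extra conditioning tokens."""
    d = clip_embedding.shape[0]
    return (proj @ clip_embedding).reshape(num_tokens, d)

rng = np.random.default_rng(0)
edge, depth, seg = (rng.random((8, 8)) for _ in range(3))

feat = local_adapter([edge, depth, seg], rng.random((4, 3)))
tokens = global_adapter(rng.random(16), rng.random((4 * 16, 16)), num_tokens=4)
print(feat.shape, tokens.shape)  # adapter weights stay fixed as conditions vary
```

Adding a fourth local condition only widens the stacked input (and the mixing weight's input dimension); no new adapter module is introduced, which is the property the abstract highlights.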

Author Information

Shihao Zhao (The University of Hong Kong)

1. Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang. Clean-Label Backdoor Attacks on Video Recognition Models. CVPR, 2020.
2. Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang. What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space. arXiv, 2021.
3. Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang. Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. ICCV, 2021.
4. Shaozhe Hao, Kai Han, Shihao Zhao, Kwan-Yee K. Wong. ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation. arXiv, 2023.
5. Shihao Zhao, Dongdong Chen, Yen-Chun Chen, Jianmin Bao, Shaozhe Hao, Lu Yuan, Kwan-Yee K. Wong. Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models. NeurIPS, 2023.

Dongdong Chen (Microsoft Cloud AI)
Yen-Chun Chen (Microsoft)
Jianmin Bao (Microsoft Research)
Shaozhe Hao (University of Hong Kong)
Lu Yuan (Microsoft)
Kwan-Yee K. Wong (The University of Hong Kong)
