

Poster

MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts

Jie Zhu · Yixiong Chen · Mingyu Ding · Ping Luo · Leye Wang · Jingdong Wang

East Exhibit Hall A-C #2611
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Text-to-image diffusion has attracted broad attention for its impressive image-generation capabilities. However, in human-centric text-to-image generation, particularly for faces and hands, the results often lack naturalness due to insufficient training priors. We alleviate this issue from two perspectives. 1) On the data side, we carefully collect a human-centric dataset comprising over one million high-quality human-in-the-scene images, along with two dedicated sets of close-up face and hand images. Together, these datasets provide a rich prior knowledge base for enhancing the human-centric generation capabilities of the diffusion model. 2) On the methodological side, we propose a simple yet effective method called Mixture of Low-rank Experts (MoLE), which treats low-rank modules trained on close-up face and hand images, respectively, as experts. The design is inspired by our observation of low-rank refinement: a low-rank module trained on a customized close-up dataset can enhance the corresponding image region when applied at an appropriate scale. To validate the superiority of MoLE over state-of-the-art methods in human-centric image generation, we construct two benchmarks and evaluate with diverse metrics and human studies. The project webpage, datasets, and code are released at https://sites.google.com/view/mole4diffuser/.
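The abstract only sketches the method at a high level. As a rough illustration of the general idea, the snippet below shows one plausible way to combine low-rank (LoRA-style) experts on top of a frozen base layer via soft gating. This is a minimal sketch under stated assumptions, not the authors' implementation: the class names (LowRankExpert, MoLELayer), the per-token softmax gate, the rank, and the use of exactly two experts (face and hand) are all illustrative choices; the paper should be consulted for the actual gating and scaling scheme.

    import torch
    import torch.nn as nn

    class LowRankExpert(nn.Module):
        """One low-rank (LoRA-style) expert: computes (alpha/r) * B(A(x))."""
        def __init__(self, dim_in, dim_out, rank=4, alpha=1.0):
            super().__init__()
            self.down = nn.Linear(dim_in, rank, bias=False)  # A: project to rank
            self.up = nn.Linear(rank, dim_out, bias=False)   # B: project back
            self.scale = alpha / rank
            nn.init.zeros_(self.up.weight)  # zero init: expert starts as a no-op

        def forward(self, x):
            return self.up(self.down(x)) * self.scale

    class MoLELayer(nn.Module):
        """Frozen base linear layer plus a soft mixture of low-rank experts
        (e.g., one expert adapted to faces and one to hands)."""
        def __init__(self, base: nn.Linear, num_experts=2, rank=4):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # only experts and gate are trained
            self.experts = nn.ModuleList(
                LowRankExpert(base.in_features, base.out_features, rank)
                for _ in range(num_experts)
            )
            # Gating network: assigns a soft weight to each expert per token.
            self.gate = nn.Linear(base.in_features, num_experts)

        def forward(self, x):
            weights = torch.softmax(self.gate(x), dim=-1)        # (..., E)
            expert_out = torch.stack(
                [e(x) for e in self.experts], dim=-1)            # (..., D, E)
            mixed = (expert_out * weights.unsqueeze(-2)).sum(-1) # (..., D)
            return self.base(x) + mixed

    # Hypothetical usage on token features of a diffusion U-Net block:
    layer = MoLELayer(nn.Linear(320, 320))
    y = layer(torch.randn(2, 77, 320))

The zero-initialized up-projection means each expert initially contributes nothing, so training starts from the frozen base model's behavior and the gate gradually learns where each close-up expert should apply, echoing the paper's observation that a low-rank module helps when applied at an appropriate scale.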
