Poster
Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis
Taihang Hu · Linxuan Li · Joost van de Weijer · Hongcheng Gao · Fahad Shahbaz Khan · Jian Yang · Ming-Ming Cheng · Kai Wang · Yaxing Wang
East Exhibit Hall A-C #1601
Although text-to-image (T2I) models exhibit remarkable generation capabilities, they frequently fail to accurately bind semantically related objects or attributes in the input prompts, a challenge termed semantic binding. Previous approaches either involve intensive fine-tuning of the entire T2I model or require users or large language models to specify generation layouts, adding complexity. In this paper, we define semantic binding as the task of associating a given object with its attribute, termed attribute binding, or linking it to other related sub-objects, referred to as object binding. We introduce a novel method called Token Merging (ToMe), which enhances semantic binding by aggregating relevant tokens into a single composite token. This ensures that the object, its attributes, and sub-objects all share the same cross-attention map. Additionally, to address potential confusion among main objects in complex textual prompts, we propose end token substitution as a complementary strategy. To further refine our approach in the initial stages of T2I generation, where layouts are determined, we incorporate two auxiliary losses, an entropy loss and a semantic binding loss, to iteratively update the composite token and improve generation integrity. We conducted extensive experiments to validate the effectiveness of ToMe, comparing it against various existing methods on T2I-CompBench and our proposed GPT-4o object binding benchmark. Our method is particularly effective in complex scenarios that involve multiple objects and attributes, which previous methods often fail to address. The code will be publicly available at https://github.com/hutaihang/ToMe
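To make the core idea concrete, below is a minimal sketch of the two mechanisms the abstract describes: merging a span of prompt-token embeddings into one composite token (so the merged words share a single cross-attention map), and an entropy loss over a cross-attention map that can be used to keep the composite token's attention focused. This is not the authors' implementation: the function names, the choice of mean pooling as the aggregation, and the tensor shapes are illustrative assumptions; the paper's exact aggregation, sequence-length handling (e.g., via end token substitution), and loss definitions may differ.

```python
import torch

def merge_tokens(token_embeds: torch.Tensor, span: tuple[int, int]) -> torch.Tensor:
    """Collapse the prompt tokens in [start, end) into one composite token.

    token_embeds: (seq_len, dim) text-encoder output for a single prompt.
    span: (start, end) indices of the object/attribute tokens to merge.
    Returns a shorter sequence where the span is replaced by its mean
    embedding (mean pooling is an assumption, not the paper's exact rule).
    """
    start, end = span
    composite = token_embeds[start:end].mean(dim=0, keepdim=True)  # (1, dim)
    return torch.cat([token_embeds[:start], composite, token_embeds[end:]], dim=0)

def attention_entropy(attn_map: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Entropy of one token's spatial cross-attention map (any shape).

    Minimizing this during the early denoising steps would encourage the
    composite token to attend to a compact region, one plausible reading
    of the entropy loss mentioned in the abstract.
    """
    p = attn_map.flatten()
    p = p / (p.sum() + eps)                 # normalize to a distribution
    return -(p * (p + eps).log()).sum()

# Hypothetical usage: merge the tokens for "blue backpack" so both words
# share one cross-attention map, then score a (16, 16) attention map.
embeds = torch.randn(77, 768)               # CLIP-style prompt embedding
merged = merge_tokens(embeds, (2, 4))       # tokens 2-3 become one token
print(merged.shape)                         # torch.Size([76, 768])
loss = attention_entropy(torch.rand(16, 16))
```

In the paper's full pipeline, the composite token is further updated iteratively with this entropy loss plus a semantic binding loss during the layout-forming steps of generation; the sketch above only illustrates the basic merging and entropy computation.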