

Poster

Learning to Merge Tokens via Decoupled Embedding for Efficient Vision Transformers

Dong Hoon Lee · Seunghoon Hong

East Exhibit Hall A-C #1403
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recent token reduction methods for Vision Transformers (ViTs) incorporate token merging, which measures the similarities between token embeddings and combines the most similar pairs. However, their merging policies depend directly on intermediate features in ViTs, which prevents exploiting features tailored for merging and requires end-to-end training to improve token merging. In this paper, we propose Decoupled Token Embedding for Merging (DTEM), which enhances token merging through a decoupled embedding learned via a continuously relaxed token merging process. Our method introduces a lightweight embedding module, decoupled from the ViT forward pass, that extracts dedicated features for token merging, thereby removing the restriction of relying on intermediate features. The continuous relaxation of token merging, applied during training, lets us learn the decoupled embeddings in a differentiable manner. Thanks to the decoupled structure, our method can be seamlessly integrated into existing ViT backbones and trained either modularly, by learning only the decoupled embeddings, or end-to-end, by fine-tuning. We demonstrate the applicability of DTEM on various tasks, including classification, captioning, and segmentation, with consistent improvements in token merging. In particular, on ImageNet-1k classification, DTEM achieves a 37.2% reduction in FLOPs while maintaining a top-1 accuracy of 79.85% with DeiT-small.
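To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of (a) a lightweight embedding module decoupled from the ViT forward pass that produces features used only for merging decisions, and (b) a continuously relaxed (soft) merging step that keeps the process differentiable so the decoupled embedding can be trained. All names (DecoupledMergeEmbedding, soft_merge, num_out, tau) are illustrative assumptions, not the authors' implementation; the actual DTEM merging policy and its relaxation follow the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledMergeEmbedding(nn.Module):
    """Hypothetical lightweight module that maps intermediate ViT tokens to
    dedicated features used only for deciding which tokens to merge."""
    def __init__(self, dim, merge_dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, merge_dim))

    def forward(self, tokens):           # tokens: (B, N, dim)
        return self.proj(tokens)         # (B, N, merge_dim)

def soft_merge(tokens, merge_feats, num_out, tau=0.1):
    """Continuously relaxed merging (sketch, not the paper's exact algorithm):
    each output token is a similarity-weighted average of all input tokens,
    so gradients flow back into the decoupled merge features."""
    # Use the first `num_out` tokens' merge features as soft anchors
    # (a simplification of the paper's pairwise merging policy).
    anchors = merge_feats[:, :num_out]                              # (B, M, d)
    sim = F.normalize(anchors, dim=-1) @ \
          F.normalize(merge_feats, dim=-1).transpose(1, 2)          # (B, M, N)
    assign = F.softmax(sim / tau, dim=-1)                           # soft assignment
    return assign @ tokens                                          # (B, M, dim)

# Toy usage with DeiT-small-like shapes
if __name__ == "__main__":
    tokens = torch.randn(2, 197, 384)
    embed = DecoupledMergeEmbedding(384)
    merged = soft_merge(tokens, embed(tokens), num_out=128)
    print(merged.shape)   # torch.Size([2, 128, 384])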
