Poster
Stand-Alone Self-Attention in Vision Models
Niki Parmar · Prajit Ramachandran · Ashish Vaswani · Irwan Bello · Anselm Levskaya · Jonathon Shlens

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #72

Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to ResNet-50 produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a fully self-attentional model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox.
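To make the core idea concrete, below is a minimal PyTorch sketch of a stand-alone local self-attention layer of the kind the abstract describes: each output pixel attends over a k x k neighborhood of the input, standing in for a k x k spatial convolution. This is a simplified single-head version with a basic relative-position term; the class name, parameter names, and the scaling choice are illustrative assumptions, not the authors' released implementation (which uses multiple heads and factored row/column position embeddings).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalSelfAttention2d(nn.Module):
    """Sketch of a stand-alone local self-attention layer.

    Each output pixel attends over the k x k neighborhood of its input,
    replacing a k x k spatial convolution. Single head and a single
    relative-position embedding per offset, for clarity; these are
    simplifications of the paper's formulation.
    """

    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.pad = kernel_size // 2
        # 1x1 convolutions compute per-pixel queries, keys, and values.
        self.query = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.key = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.value = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        # Learned embedding for each of the k*k relative offsets; adding it
        # to the keys yields the q.k + q.r logits used in the paper.
        self.rel_pos = nn.Parameter(
            0.02 * torch.randn(out_channels, kernel_size * kernel_size))

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.query(x)                                   # (b, c, h, w)
        k = self.key(x)
        v = self.value(x)
        c = q.shape[1]
        # Gather the k*k neighborhood of every pixel: (b, c, k*k, h*w).
        k = F.unfold(k, self.k, padding=self.pad).view(b, c, self.k * self.k, h * w)
        v = F.unfold(v, self.k, padding=self.pad).view(b, c, self.k * self.k, h * w)
        k = k + self.rel_pos.unsqueeze(0).unsqueeze(-1)     # relative positions
        q = q.view(b, c, 1, h * w)
        # Attention logits over each neighborhood, softmax, weighted sum.
        logits = (q * k).sum(dim=1, keepdim=True) / c ** 0.5
        attn = logits.softmax(dim=2)                        # (b, 1, k*k, h*w)
        out = (attn * v).sum(dim=2)                         # (b, c, h*w)
        return out.view(b, c, h, w)
```

Under the abstract's recipe, a layer like this would stand in for the 3x3 spatial convolutions inside ResNet-50's bottleneck blocks (the paper handles strided layers by attending and then spatially pooling), leaving the 1x1 convolutions in place.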

Author Information

Niki Parmar (Google)
Prajit Ramachandran (Google Brain)
Ashish Vaswani (Google Brain)
Irwan Bello (Google Brain)
Anselm Levskaya (Google)
Jonathon Shlens (Google Research)
