

Poster in Workshop: Medical Imaging meets NeurIPS

LKA: Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation

Liam Chalcroft · Ruben Lourenço Pereira · Mikael Brudfors · Andrew Kayser · Mark D'Esposito · Cathy Price · Ioannis Pappas · John Ashburner


Abstract:

Deep learning models such as Convolutional Neural Networks (CNNs) and Transformers have revolutionized image segmentation. While CNNs are computationally efficient thanks to their parameter-sharing convolutions, transformers excel at capturing long-range dependencies and global context, at the cost of greater computational demands from their self-attention layers. In this study, we evaluate an alternative approach that combines the benefits of both: a transformer-style architecture built exclusively from convolutions. We show that our model outperforms leading methods such as nnUNet and Swin-UNETR in glioblastoma segmentation, and provide evidence that the choice of architecture influences a model's texture bias.
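For illustration, the sketch below shows one plausible form of a large-kernel attention block built purely from convolutions, following the decomposition popularised by Guo et al.'s Visual Attention Network: a depthwise convolution, a depthwise dilated convolution, and a pointwise convolution, whose output multiplicatively gates the input. The 3D adaptation, the kernel sizes, and the class name `LKA3D` are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a large-kernel attention block, assuming the
# decomposition of Guo et al.'s Visual Attention Network, adapted to
# 3D volumes for brain MRI. Kernel sizes are illustrative, not the
# authors' exact configuration.
import torch
import torch.nn as nn


class LKA3D(nn.Module):
    """Approximates one large-kernel convolution with three cheap ones:
    a depthwise conv (local context), a depthwise dilated conv
    (long-range context), and a pointwise conv (channel mixing).
    The result gates the input element-wise, acting as attention."""

    def __init__(self, dim: int):
        super().__init__()
        # Local depthwise convolution (5^3 kernel).
        self.dw = nn.Conv3d(dim, dim, kernel_size=5, padding=2, groups=dim)
        # Long-range depthwise dilated convolution (7^3 kernel, dilation 3;
        # padding = dilation * (kernel_size - 1) / 2 keeps spatial size).
        self.dw_dilated = nn.Conv3d(dim, dim, kernel_size=7, padding=9,
                                    dilation=3, groups=dim)
        # Pointwise convolution to mix channels.
        self.pw = nn.Conv3d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        # Element-wise gating serves as a convolutional stand-in for
        # self-attention, at linear rather than quadratic cost.
        return x * attn


if __name__ == "__main__":
    block = LKA3D(dim=32)
    vol = torch.randn(1, 32, 64, 64, 64)  # (batch, channels, D, H, W)
    print(block(vol).shape)  # torch.Size([1, 32, 64, 64, 64])
```

Because every operation is a convolution, the block keeps the parameter sharing and linear scaling of CNNs while the composed receptive field supplies the long-range context usually attributed to self-attention.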
