Recent advances in self-attention and pure multi-layer perceptron (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models generally learn interactions among spatial locations from raw data. The complexity of self-attention and MLP grows quadratically with image size, which makes these models hard to scale up when high-resolution features are required. In this paper, we present the Global Filter Network (GFNet), a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity. Our architecture replaces the self-attention layer in vision transformers with three key operations: a 2D discrete Fourier transform, an element-wise multiplication between frequency-domain features and learnable global filters, and a 2D inverse Fourier transform. We exhibit favorable accuracy/complexity trade-offs of our models on both ImageNet and downstream tasks. Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness. Code is available at https://github.com/raoyongming/GFNet.
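To make the three operations concrete, here is a minimal PyTorch sketch of a global filter layer as described in the abstract: FFT over the spatial grid, element-wise multiplication with a learnable complex filter, and an inverse FFT. The names (GlobalFilter, h, w) and the initialization scale are illustrative assumptions; the authors' actual implementation is in the linked repository.

```python
# A minimal sketch of the global filter operation, assuming tokens laid out
# on an H x W spatial grid. Not the authors' code; see the official repo.
import torch
import torch.nn as nn


class GlobalFilter(nn.Module):
    """Drop-in replacement for a self-attention layer over spatial tokens."""

    def __init__(self, dim, h=14, w=8):
        super().__init__()
        # Learnable complex-valued global filter, stored as (real, imag) pairs.
        # w = W // 2 + 1 because rfft2 keeps only the non-redundant frequencies.
        self.filter = nn.Parameter(torch.randn(h, w, dim, 2) * 0.02)

    def forward(self, x):
        # x: (batch, H, W, dim) spatial tokens
        # 1) 2D discrete Fourier transform over the spatial dimensions
        X = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        # 2) element-wise multiplication with the learnable global filter
        X = X * torch.view_as_complex(self.filter)
        # 3) 2D inverse Fourier transform back to the spatial domain
        return torch.fft.irfft2(X, s=x.shape[1:3], dim=(1, 2), norm="ortho")


# Usage: filter a 14x14 grid of 384-dim tokens
layer = GlobalFilter(dim=384, h=14, w=8)
tokens = torch.randn(1, 14, 14, 384)
out = layer(tokens)  # (1, 14, 14, 384)
```

Because the FFT dominates the cost, the layer mixes all spatial locations in O(HW log HW) time, compared with the O((HW)^2) interactions of self-attention.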
Author Information
Yongming Rao (Tsinghua University)
Wenliang Zhao (Department of Automation, Tsinghua University)
Zheng Zhu (Tsinghua University)
Jiwen Lu (Tsinghua University)
Jie Zhou (Tsinghua University)
More from the Same Authors
- 2022 Poster: OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression
  Wanhua Li · Xiaoke Huang · Zheng Zhu · Yansong Tang · Xiu Li · Jie Zhou · Jiwen Lu
- 2022 Poster: P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting
  Ziyi Wang · Xumin Yu · Yongming Rao · Jie Zhou · Jiwen Lu
- 2023 Poster: UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models
  Wenliang Zhao · Lujia Bai · Yongming Rao · Jie Zhou · Jiwen Lu
- 2023 Poster: VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks
  Wenhai Wang · Zhe Chen · Xiaokang Chen · Jiannan Wu · Xizhou Zhu · Gang Zeng · Ping Luo · Tong Lu · Jie Zhou · Yu Qiao · Jifeng Dai
- 2023 Poster: MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory
  Yinan Liang · Ziwei Wang · Xiuwei Xu · Yansong Tang · Jie Zhou · Jiwen Lu
- 2022 Spotlight: P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting
  Ziyi Wang · Xumin Yu · Yongming Rao · Jie Zhou · Jiwen Lu
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Poster: HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
  Yongming Rao · Wenliang Zhao · Yansong Tang · Jie Zhou · Ser Nam Lim · Jiwen Lu
- 2021 Poster: DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification
  Yongming Rao · Wenliang Zhao · Benlin Liu · Jiwen Lu · Jie Zhou · Cho-Jui Hsieh
- 2017 Poster: Runtime Neural Pruning
  Ji Lin · Yongming Rao · Jiwen Lu · Jie Zhou