

Poster

Learning Frequency-Adapted Vision Foundation Model for Domain Generalized Semantic Segmentation

Qi Bi · Jingjun Yi · Hao Zheng · Haolan Zhan · Yawen Huang · Wei Ji · Yuexiang Li · Yefeng Zheng

East Exhibit Hall A-C #1700
[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The emerging vision foundation model (VFM) inherits the ability to generalize to unseen images. Nevertheless, the key challenge of domain-generalized semantic segmentation (DGSS) lies in the domain gap caused by cross-domain styles, i.e., variations in urban landscapes and environmental conditions. Hence, maintaining style invariance under varying domain styles becomes the key bottleneck in harnessing VFMs for DGSS. The frequency space produced by the Haar wavelet transform offers a feasible way to decouple style information from domain-invariant content, since content and style are retained in the low- and high-frequency components of the space, respectively. To this end, we propose a novel Frequency-Adapted (FADA) learning scheme to advance the frontier. Its overall idea is to handle content and style information separately via frequency tokens throughout the learning process. Specifically, FADA consists of two branches, i.e., a low- and a high-frequency branch. The former stabilizes the scene content, while the latter learns the scene styles and eliminates their impact on DGSS. Experiments conducted on various DGSS settings demonstrate the state-of-the-art performance of FADA and its versatility across a variety of VFMs. Source code is available at \url{https://github.com/BiQiWHU/FADA}.
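The frequency decoupling at the heart of this idea can be illustrated with a short sketch. The following is a minimal, self-contained example, not the authors' implementation (see the linked repository for that); function and variable names here are illustrative assumptions. It shows a single-level 2D Haar wavelet transform splitting a feature map into a low-frequency (LL) component, which the low-frequency branch would use to stabilize scene content, and three high-frequency components (LH, HL, HH), which the high-frequency branch would use to model and suppress style.

```python
# Minimal sketch of single-level 2D Haar wavelet decomposition of a
# feature map, assuming PyTorch. Not the authors' code; `haar_dwt` is a
# hypothetical helper name.
import torch
import torch.nn.functional as F


def haar_dwt(x: torch.Tensor):
    """Single-level 2D Haar DWT.

    x: (B, C, H, W) feature map with even H and W.
    Returns (LL, LH, HL, HH), each of shape (B, C, H/2, W/2).
    """
    b, c, h, w = x.shape
    # Orthonormal 2x2 Haar analysis filters: LL averages (low frequency,
    # content); LH/HL/HH take differences (high frequency, style/texture).
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4, 1, 2, 2)
    kernels = kernels.to(dtype=x.dtype, device=x.device)
    # Apply the four filters to every channel independently: fold the
    # channel axis into the batch, then convolve with stride 2.
    x = x.reshape(b * c, 1, h, w)
    out = F.conv2d(x, kernels, stride=2)          # (B*C, 4, H/2, W/2)
    out = out.reshape(b, c, 4, h // 2, w // 2)
    return out[:, :, 0], out[:, :, 1], out[:, :, 2], out[:, :, 3]


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)            # dummy VFM feature map
    low, *highs = haar_dwt(feats)
    # A low-frequency branch would consume `low` to stabilize content;
    # a high-frequency branch would consume LH/HL/HH to handle style.
    print(low.shape, [h.shape for h in highs])
```

Under this reading, the invertibility of the Haar transform is what makes the split lossless: the four sub-bands together retain all information in the input, so processing content and style in separate branches need not discard scene detail.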
