Zero-shot learning (ZSL) is generally achieved by aligning the semantic relationships between visual features and the corresponding class semantic descriptions. However, representing fine-grained images with global features can lead to sub-optimal results, since it neglects the discriminative differences among local regions. Moreover, different regions carry distinct discriminative information, so the important regions should contribute more to the prediction. To this end, we propose a novel stacked semantics-guided attention (S2GA) model that obtains semantically relevant features by using individual class semantic features to progressively guide the visual features, generating an attention map that weights the importance of different local regions. By feeding both the integrated visual features and the class semantic features into a multi-class classification architecture, the proposed framework can be trained end-to-end. Extensive experiments on the CUB and NABirds datasets show that the proposed approach yields consistent improvements on both fine-grained zero-shot classification and retrieval tasks.
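The core idea of semantics-guided attention can be sketched as follows. This is a minimal, hypothetical parameterization (the projection matrices `W_v`, `W_s` and scoring vector `w` are assumptions for illustration, not the paper's exact architecture): class semantics and region features are projected into a shared space, each region is scored, and a softmax attention map weights the regions; "stacking" repeats the step so the second layer refines the first layer's focus.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_step(region_feats, semantic_vec, W_v, W_s, w):
    """One semantics-guided attention step (illustrative parameterization).

    region_feats: (R, d) features of R local image regions
    semantic_vec: (k,)   class semantic description (e.g. attribute vector)
    Returns the attention map (R,) and the attention-weighted feature (d,).
    """
    # Project visual and semantic inputs into a shared space, then score regions.
    h = np.tanh(region_feats @ W_v + semantic_vec @ W_s)   # (R, a)
    scores = h @ w                                         # (R,)
    # Softmax turns scores into an attention map over regions.
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha, alpha @ region_feats                     # weighted sum: (d,)

# Toy dimensions: 4 regions, 8-d visual features, 5-d semantics, 6-d shared space.
R, d, k, a = 4, 8, 5, 6
regions = rng.normal(size=(R, d))
semantics = rng.normal(size=k)
W_v = rng.normal(size=(d, a))
W_s = rng.normal(size=(k, a))
w = rng.normal(size=a)

# "Stacked": run the step twice, adding the attended feature back into each
# region so the second layer can refine the first layer's attention.
alpha1, g1 = attention_step(regions, semantics, W_v, W_s, w)
alpha2, g2 = attention_step(regions + g1, semantics, W_v, W_s, w)
```

In a trainable model the projections would be learned jointly with the classifier; the sketch only shows how class semantics guide the region weighting.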
Yunlong Yu (Tianjin University)
Zhong Ji (Tianjin University)
He is currently an Associate Professor with the School of Electrical and Information Engineering, Tianjin University. His current research interests include machine learning, computer vision, multimedia understanding, and video summarization. He has authored more than 60 scientific papers in venues including NIPS, TNNLS, TIP, TCYB, ICME, and ICIP.
Yanwei Fu (Fudan University, Shanghai; AItrics Inc., Seoul)
Jichang Guo (Tianjin University)
Yanwei Pang (Tianjin University)
Zhongfei (Mark) Zhang (Binghamton University)