Attention in Convolutional LSTM for Gesture Recognition
Liang Zhang · Guangming Zhu · Lin Mei · Peiyi Shen · Syed Afaq Ali Shah · Mohammed Bennamoun

Tue Dec 04 02:00 PM -- 04:00 PM (PST) @ Room 210 #73

Convolutional long short-term memory (LSTM) networks have been widely used for action/gesture recognition, and different attention mechanisms have also been embedded into LSTM or convolutional LSTM (ConvLSTM) networks. Building on previous gesture recognition architectures that combine the three-dimensional convolutional neural network (3DCNN) and ConvLSTM, this paper explores the effects of attention mechanisms in ConvLSTM. Several variants of ConvLSTM are evaluated: (a) removing the convolutional structures of the three gates in ConvLSTM, (b) applying the attention mechanism to the input of ConvLSTM, and (c) reconstructing the input gate and (d) the output gate, respectively, with the modified channel-wise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to spatiotemporal feature fusion, and that attention mechanisms embedded into the input and output gates do not improve the feature fusion. In other words, when taking spatial or spatiotemporal features as input, ConvLSTM mainly contributes temporal fusion across the recurrent steps to learn long-term spatiotemporal features. On this basis, a new variant of LSTM is derived, in which convolutional structures are embedded only into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available.
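To make the derived variant concrete, the following is a minimal NumPy sketch of one recurrent step in which convolution appears only in the input-to-state transition. The gate parameterization here is an assumption for illustration: gates are computed channel-wise from globally pooled input and hidden features (no spatial convolutions, consistent with the finding that gate convolutions scarcely help), and the hidden-to-state path is a 1x1 channel-mixing transform. All parameter names (`W_i`, `K_xg`, etc.) are hypothetical, not the authors' exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, w):
    """Naive 'same'-padded cross-correlation.
    x: (C_in, H, W), w: (C_out, C_in, k, k) with odd k."""
    c_out, _, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for co in range(c_out):
        for i in range(H):
            for j in range(W):
                out[co, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[co])
    return out

def lstm_variant_step(x, h, c, p):
    """One step of the sketched LSTM variant.
    x: (C_in, H, W) input feature map; h, c: (C_hid, H, W) states."""
    # Gates: channel-wise scalars from globally average-pooled features
    # (no spatial convolutions in the three gates -- an assumed design
    # reflecting the paper's observation that gate convolutions barely
    # contribute to spatiotemporal feature fusion).
    pooled = np.concatenate([x.mean(axis=(1, 2)), h.mean(axis=(1, 2))])
    i = sigmoid(p['W_i'] @ pooled + p['b_i'])[:, None, None]
    f = sigmoid(p['W_f'] @ pooled + p['b_f'])[:, None, None]
    o = sigmoid(p['W_o'] @ pooled + p['b_o'])[:, None, None]
    # Candidate state: convolution only in the input-to-state transition;
    # the hidden-to-state path is a 1x1 (pure channel-mixing) transform.
    g = np.tanh(conv2d_same(x, p['K_xg'])
                + np.einsum('oc,chw->ohw', p['W_hg'], h)
                + p['b_g'][:, None, None])
    c_new = f * c + i * g          # per-channel gating of the memory cell
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because the gates collapse to one scalar per channel, the spatial structure of the features is carried entirely by the input-to-state convolution, while the recurrence performs the temporal fusion the abstract attributes to ConvLSTM.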

Author Information

Liang Zhang (School of Computer Science and Technology, Xidian University, China)
Guangming Zhu (Xidian University)
Lin Mei (The Third Research Institute of Ministry of Public Security, China)
Peiyi Shen (School of Software, Xidian University, China)
Syed Afaq Ali Shah (Department of Computer Science and Software Engineering, The University of Western Australia)
Mohammed Bennamoun (University of Western Australia)
