Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer-term temporal information. The capacity of an LSTM network can be increased by widening and adding layers. However, usually the former introduces additional parameters, while the latter increases the runtime. As an alternative we propose the Tensorized LSTM, in which the hidden states are represented by tensors and updated via a cross-layer convolution. By increasing the tensor size, the network can be widened efficiently without additional parameters, since the parameters are shared across different locations in the tensor; by delaying the output, the network can be deepened implicitly with little additional runtime, since deep computations for each timestep are merged into temporal computations of the sequence. Experiments conducted on five challenging sequence learning tasks show the potential of the proposed model.
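To make the idea concrete, below is a minimal, illustrative sketch of a tensorized LSTM cell in PyTorch; it is not the authors' reference implementation. It assumes hidden and cell states of shape (batch, P, M), where P is the tensor size and M the channel size; all gates come from a single convolution across the tensor dimension, so the parameters are shared across the P locations and widening the tensor adds no parameters; the output is read from the last location, so with a delay each output has passed through several cross-location updates. The class name TensorizedLSTMCell, the single-kernel gate convolution, and the injection of the input at the first location are simplifying assumptions.

```python
# Hedged sketch of a tensorized LSTM cell (illustrative, not the paper's exact model).
import torch
import torch.nn as nn


class TensorizedLSTMCell(nn.Module):
    def __init__(self, input_size, tensor_size, channels, kernel_size=3):
        super().__init__()
        self.P, self.M = tensor_size, channels
        # Project the input into the first location of the hidden tensor.
        self.input_proj = nn.Linear(input_size, channels)
        # One cross-location convolution produces all four gates; its weights
        # are shared across the P tensor locations.
        self.gate_conv = nn.Conv1d(channels, 4 * channels, kernel_size,
                                   padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state                                  # each (batch, P, M)
        h = h.clone()
        h[:, 0, :] = h[:, 0, :] + self.input_proj(x)  # inject input at location 0
        # Convolve across the tensor dimension: Conv1d expects (batch, M, P).
        gates = self.gate_conv(h.transpose(1, 2)).transpose(1, 2)
        i, f, o, g = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        # Read the output from the last location; delaying the output lets each
        # prediction pass through ~P cross-location updates, deepening the
        # network implicitly over time.
        return h[:, -1, :], (h, c)


# Hypothetical usage: tensor_size controls width/implicit depth without new parameters.
cell = TensorizedLSTMCell(input_size=32, tensor_size=4, channels=64)
x_t = torch.randn(8, 32)                                # one timestep of input
state = (torch.zeros(8, 4, 64), torch.zeros(8, 4, 64))  # (hidden, cell) tensors
y_t, state = cell(x_t, state)                           # y_t: (8, 64)
```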
Author Information
Zhen He (University College London)
Shaobing Gao (Sichuan University)
I am interested in Computer Vision and Biologically Inspired Vision (in particular, Color Vision, Visual Adaptation, and Neural Networks). Shao-Bing Gao received his Ph.D. degree from UESTC, Chengdu, China, in 2017. He was a joint Ph.D. student with the Institute of Behavioral Neuroscience, UCL, UK. He is currently an associate professor in the College of Computer Science, Sichuan University. His research interests include biologically inspired vision and image processing. He has authored or co-authored over 15 articles in high-impact international journals and conferences, including TPAMI, TIP, CVPR, ICCV, ECCV, and NIPS. He has also served as an active reviewer for many journals, including TIP, CVIU, TCSVT, TII, and SPL.
Liang Xiao (National University of Defense Technology)
Daxue Liu (National University of Defense Technology)
Hangen He (National University of Defense Technology)
David Barber (University College London)
More from the Same Authors
- 2021: Adaptive Optimization with Examplewise Gradients » Julius Kunze · James Townsend · David Barber
- 2018 Poster: Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting » Hippolyt Ritter · Aleksandar Botev · David Barber
- 2018 Poster: Modular Networks: Learning to Decompose Neural Computation » Louis Kirsch · Julius Kunze · David Barber
- 2018 Poster: Generative Neural Machine Translation » Harshil Shah · David Barber
- 2017 Poster: Thinking Fast and Slow with Deep Learning and Tree Search » Thomas Anthony · Zheng Tian · David Barber