Poster
Dilated Recurrent Neural Networks
Shiyu Chang · Yang Zhang · Wei Han · Mo Yu · Xiaoxiao Guo · Wei Tan · Xiaodong Cui · Michael Witbrock · Mark Hasegawa-Johnson · Thomas Huang

Mon Dec 04 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #104

Learning with recurrent neural networks (RNNs) on long sequences is a notoriously difficult task. There are three major challenges: 1) complex dependencies, 2) vanishing and exploding gradients, and 3) efficient parallelization. In this paper, we introduce a simple yet effective RNN connection structure, the DilatedRNN, which simultaneously tackles all of these challenges. The proposed architecture is characterized by multi-resolution dilated recurrent skip connections and can be combined flexibly with diverse RNN cells. Moreover, the DilatedRNN reduces the number of parameters needed and enhances training efficiency significantly, while matching state-of-the-art performance (even with standard RNN cells) in tasks involving very long-term dependencies. To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures. We rigorously prove the advantages of the DilatedRNN over other recurrent neural architectures. The code for our method is publicly available at https://github.com/code-terminator/DilatedRNN.
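The core idea in the abstract is compact enough to sketch: a layer with dilation d updates its hidden state from d steps back, h_t = cell(x_t, h_{t-d}), rather than from h_{t-1}, and layers are stacked with exponentially increasing dilations (1, 2, 4, ...). Below is a minimal illustrative sketch, not the authors' implementation: it uses PyTorch with a standard GRU cell, and the class name, layer sizes, and dilation schedule are all assumptions made for illustration. The authors' actual TensorFlow code is in the repository linked above.

```python
import torch
import torch.nn as nn

class DilatedRNNLayer(nn.Module):
    """One recurrent layer with a dilated skip connection (hypothetical sketch):
    h_t = cell(x_t, h_{t - dilation}) instead of the usual h_t = cell(x_t, h_{t-1})."""

    def __init__(self, input_size, hidden_size, dilation):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)  # any standard RNN cell works
        self.dilation = dilation
        self.hidden_size = hidden_size

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        zeros = x.new_zeros(batch, self.hidden_size)
        states = []
        for t in range(seq_len):
            # State feeds back from `dilation` steps earlier; zero state before that.
            prev = states[t - self.dilation] if t >= self.dilation else zeros
            states.append(self.cell(x[t], prev))
        return torch.stack(states)  # (seq_len, batch, hidden_size)

# Stacking layers with exponentially increasing dilations gives the
# multi-resolution structure the abstract describes (sizes are arbitrary here).
dilations = [1, 2, 4, 8]
layers = [DilatedRNNLayer(16 if i == 0 else 32, 32, d)
          for i, d in enumerate(dilations)]
h = torch.randn(100, 8, 16)  # (seq_len, batch, features)
for layer in layers:
    h = layer(h)
```

With this connection pattern, the top layer reaches across the sequence in far fewer recurrent hops than a chain of step-1 recurrences, which is the intuition the paper formalizes with the mean recurrent length measure.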

Author Information

Shiyu Chang (IBM T. J. Watson Research Center)
Yang Zhang (IBM T. J. Watson Research Center)
Wei Han (University of Illinois at Urbana-Champaign)
Mo Yu (Johns Hopkins University)
Xiaoxiao Guo (IBM Research)
Wei Tan (IBM T. J. Watson Research Center)
Xiaodong Cui (IBM T. J. Watson Research Center)
Michael Witbrock (IBM Research, USA)
Mark Hasegawa-Johnson (University of Illinois)

Professor Mark Hasegawa-Johnson (Fellow of the ASA, 2011; Fellow of the IEEE, 2020) has been on the faculty of the University of Illinois ECE Department since 1999. His Ph.D. thesis (MIT, 1996), "Formant and Burst Spectral Measures with Quantitative Error Models for Speech Sound Classification," initiated a lifelong career in the mathematical representation of linguistic knowledge. He is Treasurer of ISCA, a Senior Area Editor of the IEEE Transactions on Audio, Speech, and Language Processing, a reviewer for the NSF, NIH, EPSRC, NWO, and QNRF, and was a plenary speaker at the 2020 IEEE Workshop on Automatic Speech Recognition and Understanding.

Thomas Huang (UIUC)
