Poster

Hessian-free Optimization for Learning Deep Multidimensional Recurrent Neural Networks

Minhyung Cho · Chandra Dhir · Jaehyung Lee

210 C #32

Abstract:

Multidimensional recurrent neural networks (MDRNNs) have shown remarkable performance in speech and handwriting recognition. The performance of an MDRNN improves as its depth increases, and the difficulty of training the deeper network is overcome by Hessian-free (HF) optimization. When connectionist temporal classification (CTC) is used as the objective for training an MDRNN on sequence labeling, the non-convexity of CTC poses a problem for applying HF to the network. As a solution, a convex approximation of CTC is formulated, and its relationship to the EM algorithm and the Fisher information matrix is discussed. An MDRNN up to 15 layers deep is successfully trained using HF, yielding improved performance on sequence labeling.
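The inner loop of Hessian-free optimization solves a damped curvature system using only curvature-vector products, via conjugate gradient. The following is a minimal sketch of that inner loop on a toy quadratic; the matrix `G`, the damping value, and all variable names are illustrative assumptions, not the paper's actual CTC/MDRNN setup:

```python
import numpy as np

def conjugate_gradient(gvp, b, max_iters=50, tol=1e-10):
    """Solve G x = b using only curvature-vector products gvp(v).

    This matrix-free conjugate-gradient loop is the core of HF
    optimization: G is never formed explicitly for a real network,
    only products G @ v are computed.
    """
    x = np.zeros_like(b)
    r = b - gvp(x)          # initial residual
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(max_iters):
        Gp = gvp(p)
        alpha = rs / (p @ Gp)
        x += alpha * p
        r -= alpha * Gp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-in (hypothetical, not from the paper): a damped
# positive-definite curvature matrix, as used in standard HF.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
G = A @ A.T + 0.1 * np.eye(8)   # Gauss-Newton-like curvature + damping
grad = rng.standard_normal(8)

# One HF update direction: solve G d = -grad by CG.
step = conjugate_gradient(lambda v: G @ v, -grad)
```

For a real MDRNN, `gvp` would be replaced by a Gauss-Newton-vector product computed with forward- and reverse-mode differentiation through the network, with the convex approximation of CTC supplying a positive semidefinite output-layer curvature.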