Poster
Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks
Alex Graves · Santiago Fernandez · Marcus Liwicki · Horst Bunke · Jürgen Schmidhuber

Wed Dec 05 10:30 AM -- 10:40 AM (PST)

On-line handwriting recognition is unusual among sequence labelling tasks in that the underlying generator of the observed data, i.e. the movement of the pen, is recorded directly. However, the raw data can be difficult to interpret because each letter is spread over many pen locations. As a consequence, sophisticated pre-processing is required to obtain inputs suitable for conventional sequence labelling algorithms, such as HMMs. In this paper we describe a system capable of directly transcribing raw on-line handwriting data. The system consists of a recurrent neural network trained for sequence labelling, combined with a probabilistic language model. In experiments on an unconstrained on-line database, we record excellent results using either raw or pre-processed data, well outperforming a benchmark HMM in both cases.
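The core of the system is a recurrent network trained as a sequence labeller, combined with a probabilistic language model. The sketch below is not the authors' implementation: it is a minimal illustration assuming a bidirectional LSTM over raw pen points (x, y, pen-up flag) trained with the Connectionist Temporal Classification (CTC) loss in PyTorch, with an illustrative character-set size, and it omits the language model used at decoding time.

```python
# Minimal sketch (illustrative, not the paper's system): a bidirectional LSTM
# sequence labeller trained with the CTC loss on raw pen trajectories.
# Feature layout (x, y, pen-up flag) and the character count are assumptions.
import torch
import torch.nn as nn

class HandwritingTranscriber(nn.Module):
    def __init__(self, n_features=3, n_hidden=128, n_chars=80):
        super().__init__()
        # Bidirectional LSTM over the sequence of pen points.
        self.rnn = nn.LSTM(n_features, n_hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        # Output layer over the characters plus the CTC "blank" label (index 0).
        self.fc = nn.Linear(2 * n_hidden, n_chars + 1)

    def forward(self, x):
        # x: (batch, time, n_features) raw pen points
        out, _ = self.rnn(x)
        return self.fc(out).log_softmax(dim=-1)  # (batch, time, n_chars + 1)

# One training step with the CTC loss on dummy data.
model = HandwritingTranscriber()
ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 200, 3)                     # dummy batch of pen trajectories
targets = torch.randint(1, 81, (4, 20))        # dummy character-index targets
input_lens = torch.full((4,), 200, dtype=torch.long)
target_lens = torch.full((4,), 20, dtype=torch.long)

log_probs = model(x).transpose(0, 1)           # CTCLoss expects (time, batch, classes)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
opt.step()
```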

Author Information

Alex Graves (Google DeepMind)

His main contributions to neural networks include the Connectionist Temporal Classification (CTC) training algorithm (widely used for speech, handwriting and gesture recognition, e.g. by Google voice search), a type of differentiable attention for RNNs (originally for handwriting generation, now a standard tool in computer vision, machine translation and elsewhere), stochastic gradient variational inference, and Neural Turing Machines. He works at Google DeepMind.

Santiago Fernandez (IDSIA)
Marcus Liwicki (Institute of Computer Science & Applied Math, University of Bern, Switzerland)
Horst Bunke (Institute of Computer Science & Applied Math, University of Bern, Switzerland)
Jürgen Schmidhuber (Swiss AI Lab, IDSIA (USI & SUPSI); NNAISENSE; KAUST)

Since age 15 or so, the main goal of Professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. His lab's Deep Learning Neural Networks, based on ideas published in the "Annus Mirabilis" 1990-1991, have revolutionised machine learning and AI. By the mid 2010s, they were on 3 billion devices, and used billions of times per day by users of the world's most valuable public companies, e.g., for greatly improved (CTC-LSTM-based) speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion LSTM-based translations per day), Apple's Siri and QuickType on all iPhones, the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests through deep neural nets, with superhuman performance. In 2012, they had the first deep NN to win a medical imaging contest (on cancer detection). All of this attracted enormous interest from industry. His research group also established the fields of mathematically rigorous universal AI and recursive self-improvement in metalearning machines that learn to learn (since 1987). In 1990, he introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity (GANs are a special case). In 1991, he introduced very deep learning through unsupervised pre-training, and neural fast weight programmers formally equivalent to what's now called linear Transformers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. He is the recipient of numerous awards, author of over 350 peer-reviewed papers, and Chief Scientist of the company NNAISENSE, which aims at building the first practical general-purpose AI. He is a frequent keynote speaker, and advises various governments on AI strategies.
