Workshop
Sat Dec 8th 08:00 AM -- 06:30 PM @ Room 513DEF
Interpretability and Robustness in Audio, Speech, and Language
Mirco Ravanelli · Dmitriy Serdyuk · Ehsan Variani · Bhuvana Ramabhadran

Domains of natural and spoken language processing have a rich history deeply rooted in information theory, statistics, digital signal processing, and machine learning. With the rapid rise of deep learning (the "deep learning revolution"), many of these systematic approaches have been replaced by variants of deep neural methods that often achieve unprecedented performance in many fields. As more and more of the spoken language processing pipeline is replaced by sophisticated neural layers, feature extraction, adaptation, and noise robustness are learned inherently within the network. More recently, end-to-end frameworks that learn a mapping from speech (audio) to target labels (words, phones, graphemes, sub-word units, etc.) have become increasingly popular across speech processing, in tasks including speech recognition, speaker identification, language/dialect identification, multilingual speech processing, code switching, natural language processing, and speech synthesis.

A key aspect behind the success of deep learning lies in the discovered low- and high-level representations, which can potentially capture relevant underlying structure in the training data. In the NLP domain, for instance, researchers have mapped word and sentence embeddings to semantic and syntactic similarity and argued that the models capture latent representations of meaning. Nevertheless, recent work on adversarial examples has shown that it is possible to easily fool a neural network (such as a speech recognizer or a speaker verification system) simply by adding a small amount of specially constructed noise. Such a remarkable sensitivity to adversarial attacks highlights how superficial the discovered representations could be, raising serious concerns about the actual robustness, security, and interpretability of modern deep neural networks. This weakness naturally leads researchers to ask what these models are really learning, how we can interpret what they have learned, and how the representations provided by current neural networks can be revealed or explained in a way that allows modeling power to be enhanced further. These open questions have recently increased interest in the interpretability of deep models, as witnessed by the numerous works published on this topic at all the major machine learning conferences. Moreover, workshops at NIPS 2016, NIPS 2017, and Interspeech 2017 have promoted research and discussion around this important issue.
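The "small amount of specially constructed noise" mentioned above can be made concrete with the fast gradient sign method (FGSM) of Goodfellow et al. Below is a minimal PyTorch sketch against a hypothetical audio classifier; `model`, `waveform`, and `label` are illustrative placeholders, not code from any workshop paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, waveform, label, epsilon=1e-3):
    """Return an adversarially perturbed copy of `waveform` using the
    fast gradient sign method (FGSM). All arguments are hypothetical
    placeholders: `model` maps waveforms to class logits."""
    x = waveform.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss w.r.t. the true label
    loss.backward()
    # Step each sample in the direction that most increases the loss;
    # a small epsilon keeps the added noise barely perceptible, yet it
    # can be enough to flip the classifier's prediction.
    return (x + epsilon * x.grad.sign()).detach()
```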
With our initiative, we wish to further foster progress on the interpretability and robustness of modern deep learning techniques, with a particular focus on audio, speech, and NLP technologies. The workshop will also analyze the connection between deep learning and models developed earlier for machine learning, linguistic analysis, signal processing, and speech recognition. In this way we hope to encourage a discussion amongst experts and practitioners in these areas, with the expectation of understanding these models better and building upon the existing collective expertise.

The workshop will feature invited talks, panel discussions, and contributed oral and poster presentations. We welcome papers that specifically address one or more of the leading questions listed below:
1. Is there a theoretical/linguistic motivation or analysis that can explain how networks encapsulate the structure of the training data they learn from?
2. Does the visualization of this information (e.g., MDS, t-SNE) offer any insights into creating a better model? (See the sketch after this list.)
3. How can we design more powerful networks with simpler architectures?
4. How can we exploit adversarial examples to improve system robustness?
5. Do alternative methods offer any complementary modeling power to what the networks can memorize?
6. Can we explain the path of inference?
7. How do we analyze the data requirements for a given model? How does multilingual data improve learning power?
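On question 2: a common first step is to embed a layer's activations with t-SNE and look for class structure. Here is a minimal sketch using scikit-learn, with random placeholder arrays standing in for real activations and labels.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder data standing in for real quantities: hidden-layer
# activations (one row per frame/utterance) and their phone labels.
rng = np.random.default_rng(0)
activations = rng.standard_normal((500, 256))
labels = rng.integers(0, 10, 500)

# Project the 256-dimensional activations to 2-D and color by label;
# visible clusters hint at what structure the layer has captured.
embedded = TSNE(n_components=2, perplexity=30).fit_transform(activations)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=5)
plt.title("t-SNE of hidden-layer activations")
plt.show()
```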

08:45 AM Workshop Opening (Introduction)
Mirco Ravanelli, Dmitriy Serdyuk, Ehsan Variani, Bhuvana Ramabhadran
09:00 AM Rich Caruana, "Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning" (Talk)
Rich Caruana
09:30 AM Jason Yosinski, "Good and bad assumptions in model design and interpretability" (Talk)
Jason Yosinski
10:00 AM Brandon Carter, "Local and global model interpretability via backward selection and clustering" (Talk)
Brandon Carter
10:15 AM Andreas Krug, "Neuron Activation Profiles for Interpreting Convolutional Speech Recognition Models" (Talk)
Andreas Krug
10:30 AM Coffee break + posters 1 (Break)
Samuel Myer, Wei-Ning Hsu, Jialu Li, Monica Dinculescu, Lea Schönherr, Ehsan Hosseini-Asl, Skyler Seto, Oiwi Parker Jones, Imran Sheikh, Thomas Manzini, Yonatan Belinkov, Nadir Durrani, Alexander Amini, Johanna Hansen, Gabi Shalev, Jamin Shin, Paul Smolensky, Lisa Fan, Zining Zhu, Hamid Eghbal-zadeh, Ben Baer, Abelino Jimenez, João Felipe Santos, Jan Kremer, Erik McDermott, Andreas Krug, Tzeviya S Fuchs, Shuai Tang, Brandon Carter, David Gifford, Albert Zeyer, André Merboldt, Krishna Pillutla, Katherine Lee, Titouan Parcollet, Orhan Firat, Gautam Bhattacharya, Jahangir Alam, Mirco Ravanelli
11:00 AM Hynek Hermansky, "Learning - not just for machines anymore" (Talk)
Hynek Hermansky
11:30 AM Michiel Bacchiani, "Observations in Joint Learning of Features and Classifiers for Speech and Language" (Talk)
Michiel Bacchiani
12:00 PM Mirco Ravanelli, "Interpretable convolutional filters with SincNet" (Talk)
Mirco Ravanelli
12:15 PM Hamid Eghbal-zadeh, "Deep Within-Class Covariance Analysis for Robust Deep Audio Representation Learning" (Talk)
Hamid Eghbal-zadeh
12:30 PM Lunch Break (Break)
01:30 PM Ralf Schlüter, "Automatic Speech Recognition Architectures: from HMM to End-to-End Modeling" (Talk)
Ralf Schlüter
02:00 PM Erik McDermott, "A Deep Generative Acoustic Model for Compositional Automatic Speech Recognition" (Talk)
Erik McDermott
02:15 PM Jamin Shin, "Interpreting Word Embeddings with Eigenvector Analysis" (Talk)
Jamin Shin
02:30 PM Jan Kremer, "On the Inductive Bias of Word-Character-Level Multi-Task Learning for Speech Recognition" (Talk)
Jan Kremer
02:45 PM Coffee break + posters 2 (Break)
Jan Kremer, Erik McDermott, Brandon Carter, Albert Zeyer, Andreas Krug, Paul Pu Liang, Katherine Lee, Dominika Basaj, Abelino Jimenez, Lisa Fan, Gautam Bhattacharya, Tzeviya S Fuchs, David Gifford, Loren Lugosch, Orhan Firat, Ben Baer, Jahangir Alam, Jamin Shin, Mirco Ravanelli, Paul Smolensky, Zining Zhu, Hamid Eghbal-zadeh, Skyler Seto, Imran Sheikh, João Felipe Santos, Yonatan Belinkov, Nadir Durrani, Oiwi Parker Jones, Shuai Tang, André Merboldt, Titouan Parcollet, Wei-Ning Hsu, Krishna Pillutla, Ehsan Hosseini-Asl, Monica Dinculescu, Alexander Amini, Ying Zhang, Taoli Cheng, Alain Tapp
03:30 PM Mike Schuster, "Learning from the move to neural machine translation at Google" (Talk)
Mike Schuster
04:00 PM Alexander Rush, "Interpretability in Text Generation" (Talk)
Alexander Rush
04:30 PM Shuai Tang, "Learning Distributed Representations of Symbolic Structure Using Binding and Unbinding Operations" (Talk)
Shuai Tang
04:45 PM Paul Pu Liang, "Learning Robust Joint Representations for Multimodal Sentiment Analysis" (Talk)
Paul Pu Liang
05:00 PM Jason Eisner, "BiLSTM-FSTs and Neural FSTs" (Talk)
Jason Eisner
05:30 PM Panel Discussion (Panel)
Rich Caruana, Mike Schuster, Ralf Schlüter, Hynek Hermansky, Renato De Mori, Samy Bengio, Michiel Bacchiani, Jason Eisner