Workshop
Mon Dec 13 05:00 AM -- 05:00 PM (PST)
Efficient Natural Language and Speech Processing (Models, Training, and Inference)
Mehdi Rezagholizadeh · Lili Mou · Yue Dong · Pascal Poupart · Ali Ghodsi · Qun Liu

Workshop Home Page

This workshop introduces fundamental problems in natural language and speech processing concerning the efficiency of models, their training, and their inference, which should interest the broader machine learning and deep learning community. The program offers an interactive platform for gathering experts and talent from academia and industry through invited keynote talks, panel discussions, paper submissions, reviews, posters, oral presentations, and a mentorship program.
The workshop provides an opportunity to discuss and learn from one another, exchange ideas, build connections, and brainstorm potential solutions and future collaborations. Its topics are relevant to anyone working on general machine learning, deep learning, optimization, theory, and NLP and speech applications.

Call for Papers
We encourage the NeurIPS community to submit their solutions, ideas, and ongoing work concerning data, model, training, and inference efficiency for NLP and speech processing. The scope of this workshop includes, but is not limited to, the following topics.
(For more details, please visit the Workshop Homepage.)

- Efficient Pre-Training and Fine-Tuning
- Model Compression
- Efficient Training
- Data Efficiency
- Edge Intelligence

Important Dates:
- Submission Deadline: September 18, 2021 (AOE)
- Acceptance Notification: October 22, 2021
- Camera-Ready Submission: November 1, 2021
- Workshop Date: December 13, 2021

Opening Speech (Opening)
Keynote 1 (Keynote Talk)
Keynote 2 (Keynote Talk)
Keynote 3 (Keynote Talk)
Break
Keynote 4 (Keynote Talk)
Keynote 5 (Keynote Talk)
Keynote 6 (Keynote Talk)
Lunch Break (Break)
Poster Session 1 (Poster Session)
A versatile and efficient approach to summarize speech into utterance-level representations (Spotlight)
Towards Zero and Few-shot Knowledge-seeking Turn Detection in Task-orientated Dialogue Systems (Spotlight)
Consistent Accelerated Inference via Confident Adaptive Transformers (Spotlight)
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models (Spotlight)
Communication-Efficient Federated Learning for Neural Machine Translation (Spotlight)
Dynamic-TinyBERT: Further Enhance the Inference Efficiency of TinyBERT by Dynamic Sequence Length (Spotlight)
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models (Spotlight)
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators (Spotlight)
Keynote 7 (Keynote Talk)
Keynote 8 (Keynote Talk)
Keynote 9 (Keynote Talk)
Break
Keynote 10 (Keynote Talk)
Keynote 11 (Keynote Talk)
Keynote 12 (Keynote Talk)
Break
Panel Discussion
Best Papers and Closing Remarks (Closing)
Poster Session 2 (Poster Session)
Pruning Encoders with a Multitask Objective (Poster)
Undivided Attention: Are Intermediate Layers Necessary for BERT? (Poster)
Prune Once for All: Sparse Pre-Trained Language Models (Poster)
Dynamic-TinyBERT: Further Enhance the Inference Efficiency of TinyBERT by Dynamic Sequence Length (Poster)
Compressing Pre-trained Language Models using Progressive Low Rank Decomposition (Poster)
Towards Continual Entity Learning in Language Models for Conversational Agents (Poster)
Efficient Strategies of Few-Shot On-Device Voice Cloning (Poster)
Unsupervised Domain Adaptation with Adapter (Poster)
Towards Textual Out-of-Domain Detection without any In-Domain Labels (Poster)
Kronecker Decomposition for GPT Compression (Poster)
Continual Few-Shot Learning for Named Entity Recognition (Poster)
Evaluating robustness of You Only Hear Once (YOHO) Algorithm on noisy audios in the VOICe Dataset (Poster)
A Short Study on Compressing Decoder-Based Language Models (Poster)
Adversarial Conversational Shaping for Intelligent Agents (Poster)
A versatile and efficient approach to summarize speech into utterance-level representations (Poster)
User-in-the-Loop Named Entity Recognition via Counterfactual Learning (Poster)
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators (Poster)
Towards Zero and Few-shot Knowledge-seeking Turn Detection in Task-orientated Dialogue Systems (Poster)
Consistent Accelerated Inference via Confident Adaptive Transformers (Poster)
Towards efficient end-to-end speech recognition with biologically-inspired neural networks (Poster)
Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains (Poster)
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models (Poster)
Communication-Efficient Federated Learning for Neural Machine Translation (Poster)
Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning (Poster)
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models (Poster)
Adaptive Fine-tuning for Vision and Language Pre-trained Models (Poster)