Workshop
Mon Dec 13 05:00 AM -- 05:00 PM (PST)
Efficient Natural Language and Speech Processing (Models, Training, and Inference)
Mehdi Rezagholizadeh · Lili Mou · Yue Dong · Pascal Poupart · Ali Ghodsi · Qun Liu

This workshop introduces fundamental problems in natural language and speech processing that are of interest to the broader machine learning and deep learning community, with a focus on improving the efficiency of models, their training, and their inference. The program offers an interactive platform for gathering experts and talent from academia and industry through invited keynote talks, panel discussions, paper submissions and reviews, posters, oral presentations, and a mentorship program.
This provides an opportunity to discuss and learn from each other, exchange ideas, build connections, and brainstorm on potential solutions and future collaborations. The topics of this workshop will interest people working on general machine learning, deep learning, optimization, theory, and NLP and speech applications.

Call for Papers
We encourage the NeurIPS community to submit their solutions, ideas, and ongoing work concerning data, model, training, and inference efficiency for NLP and speech processing. The scope of this workshop includes, but is not limited to, the following topics.
(For more details please visit the Workshop Homepage.)

- Efficient Pre-Training and Fine-Tuning
- Model Compression
- Efficient Training
- Data Efficiency
- Edge Intelligence
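
Among these topics, model compression is often approached with knowledge distillation, in which a compact student model is trained to mimic a larger teacher (several accepted papers below, such as CTR-BERT, build on this idea). The following is a minimal PyTorch sketch of a distillation loss, included for orientation only; the temperature, the loss weighting, and the function name are illustrative assumptions, not anything prescribed by the workshop.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Illustrative sketch: temperature and alpha are assumed values.
    # Soft term: match the student's tempered distribution to the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In practice the student can be any smaller architecture trained on the teacher's outputs; how to choose the student, the temperature, and the mixing weight is exactly the kind of efficiency trade-off this workshop invites submissions on.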

Important Dates:
- Submission Deadline: September 18, 2021 (AOE)
- Acceptance Notification: October 22, 2021
- Camera-Ready Submission: November 1, 2021
- Workshop Date: December 13, 2021

Opening Speech (Opening)
Continual Learning in Large-Scale Pre-Training (Keynote Talk)
Efficient Multi-lingual Neural Machine Translation (Keynote Talk)
Compression and Acceleration of Pre-trained Language Models (Keynote Talk)
Break
Summarization in Quantized Transformer Spaces (Keynote Talk)
Data-Efficient Cross-Lingual Natural Language Processing (Keynote Talk)
From model compression to self-distillation: a review (Keynote Talk)
Poster Session 1 (Poster Session)
Lunch Break (Break)
Opening of the Afternoon Session (Opening)
A versatile and efficient approach to summarize speech into utterance-level representations (Spotlight)
Towards Zero and Few-shot Knowledge-seeking Turn Detection in Task-orientated Dialogue Systems (Spotlight)
Consistent Accelerated Inference via Confident Adaptive Transformers (Spotlight)
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models (Spotlight)
Communication-Efficient Federated Learning for Neural Machine Translation (Spotlight)
Dynamic-TinyBERT: Further Enhance the Inference Efficiency of TinyBERT by Dynamic Sequence Length (Spotlight)
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models (Spotlight)
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators (Spotlight)
How to Win LMs and Influence Predictions: Using Short Phrases to Control NLP Models (Keynote Talk)
Benchmarks for Multi-objective Hyperparameter Optimization (Keynote Talk)
NLP with Synthetic Text (Keynote Talk)
Break
Toward Efficient Training of Large Language Models with Balanced Conditional Compute (Keynote Talk)
Why We Want Contrastive Learning in Language Models (Keynote Talk)
Battling with Larger Models through Grounding and Searching (Keynote Talk)
Break
Panel Discussion
Best Papers and Closing Remarks (Closing)
Poster Session 2 (Poster Session)
A Short Study on Compressing Decoder-Based Language Models (Poster)
Dynamic-TinyBERT: Further Enhance the Inference Efficiency of TinyBERT by Dynamic Sequence Length (Poster)
Towards Textual Out-of-Domain Detection without any In-Domain Labels (Poster)
Towards efficient end-to-end speech recognition with biologically-inspired neural networks (Poster)
CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models (Poster)
Prune Once for All: Sparse Pre-Trained Language Models (Poster)
Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains (Poster)
Undivided Attention: Are Intermediate Layers Necessary for BERT? (Poster)
Kronecker Decomposition for GPT Compression (Poster)
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators (Poster)
Adaptive Fine-tuning for Vision and Language Pre-trained Models (Poster)
Continual Few-Shot Learning for Named Entity Recognition (Poster)
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models (Poster)
Towards Zero and Few-shot Knowledge-seeking Turn Detection in Task-orientated Dialogue Systems (Poster)
Efficient Strategies of Few-Shot On-Device Voice Cloning (Poster)
Adversarial Conversational Shaping for Intelligent Agents (Poster)
A versatile and efficient approach to summarize speech into utterance-level representations (Poster)
Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning (Poster)
Evaluating robustness of You Only Hear Once (YOHO) Algorithm on noisy audios in the VOICe Dataset (Poster)
Consistent Accelerated Inference via Confident Adaptive Transformers (Poster)
Pruning Encoders with a Multitask Objective (Poster)
Communication-Efficient Federated Learning for Neural Machine Translation (Poster)
Unsupervised Domain Adaptation with Adapter (Poster)
Compressing Pre-trained Language Models using Progressive Low Rank Decomposition (Poster)
Towards Continual Entity Learning in Language Models for Conversational Agents (Poster)
User-in-the-Loop Named Entity Recognition via Counterfactual Learning (Poster)