Workshop
Fri Dec 02 05:30 AM -- 04:00 PM (PST) @ Ballroom C
Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)
Mehdi Rezagholizadeh · Peyman Passban · Yue Dong · Lili Mou · Pascal Poupart · Ali Ghodsi · Qun Liu

The second edition of the Efficient Natural Language and Speech Processing (ENLSP-II) workshop focuses on fundamental and challenging problems in making natural language and speech processing, especially pre-trained models, more efficient in terms of data, model size, training, and inference. The workshop program offers an interactive platform for gathering experts and talent from academia and industry through invited talks, a panel discussion, paper submissions, reviews, interactive posters, oral presentations, and a mentorship program. It is a unique opportunity to address the efficiency issues of current models, build connections, exchange ideas, brainstorm solutions, and foster future collaborations. The topics of this workshop will be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP and speech applications.

Breakfast
Opening Remarks (Opening)
Fine-grained Interactive Vision Language Pre-training (KeyNote Talk)
Efficiency Tradeoffs in the Design of Neural Search Systems (KeyNote Talk)
Last Advances in End-to-End Speech Recognition (KeyNote Talk)
Collective Knowledge Graph Completion with Mutual Knowledge Distillation (Spotlight)
Attribute Controlled Dialogue Prompting (Spotlight)
Fast DistilBERT on CPUs (Spotlight)
Morning Break and Poster Session I (Break and Poster Session)
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (KeyNote Talk)
Building Language Models Based on Retrieval (KeyNote Talk)
Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training (KeyNote Talk)
Efficient Few-Shot Learning Without Prompts (Spotlight)
PCFG-based Natural Language Interface Improves Generalization for Controlled Text Generation (Spotlight)
PromptDA: Label-guided Data Augmentation for Prompt-based Few Shot Learners (Spotlight)
Lunch Break and Virtual Poster Session (Break)
Efficiently Identifying Event Causality with Knowledge and Analogy (KeyNote Talk)
Interactive Industrial Panel (Discussion Panel)
Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement (Spotlight)
Gradient Knowledge Distillation for Pre-trained Language Models (Spotlight)
Break and Poster Session II (Break and Poster Session)
Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval (KeyNote Talk)
Do we still need inductive biases after Transformer language models? (KeyNote Talk)
8-bit Methods for Efficient Deep Learning (KeyNote Talk)
Efficient Controllable Generative Models for Music and Performance Synthesis (KeyNote Talk)
Best Paper and Poster Awards (Closing Remarks)
Can we get smarter than majority vote? Efficient use of individual rater’s labels for content moderation (Poster)
INT8 Transformers for Inference Acceleration (Poster)
Graph Masking Pre-training for Graph-to-Text Generation (Poster)
A Theory of Unsupervised Translation for Understanding Animal Communication (Poster)
On Spectral and Temporal Feature Encoding Behaviour in Stacked Architectures (Poster)
Towards Data Efficient And Robust Speech Representation Model Distillation (Poster)
Few-Shot Aspect Extraction using Prompt Training (Poster)
The Ineffectiveness of TKGE Models in Encoding Real-World Knowledge Graphs (Poster)
BudgetLongformer: Can we Cheaply Pretrain a SOTA Legal Language Model From Scratch? (Poster)
Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning (Poster)
DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low Rank Adaptation (Poster)
BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning (Poster)
Attribute Controlled Dialogue Prompting (Spotlight)
ContextNER: Contextual Phrase Generation at Scale (Poster)
Depth-Wise Attention (DWAtt): A Layer Fusion Method for Data-Efficient Classification (Poster)
Using Informative Data Subsets for Efficient Training of Large Language Models: An Initial Study (Poster)
An efficient RNN Language Model using activity sparsity and sparse back-propagation through time (Poster)
Fast DistilBERT on CPUs (Spotlight)
PCFG-based Natural Language Interface Improves Generalization for Controlled Text Generation (Spotlight)
Collective Knowledge Graph Completion with Mutual Knowledge Distillation (Poster)
PromptDA: Label-guided Data Augmentation for Prompt-based Few Shot Learners (Spotlight)
Gradient Knowledge Distillation for Pre-trained Language Models (Poster)
Parameter-Efficient Finetuning of Transformers for Source Code (Poster)
On the impact of the quality of pseudo-labels on the self-supervised speaker verification task (Poster)
Improved Knowledge Distillation by Utilizing Backward Pass Knowledge in Neural Networks (Poster)
An Exploration of Methods for Zero-shot Transfer in Small Language Models (Poster)
Topic Segmentation in the Wild: Towards Segmentation of Semi-structured & Unstructured Chats (Poster)
Efficient Few-Shot Learning Without Prompts (Poster)
TBD7 (KeyNote Talk)
Dynamic Query Representation for Extractive Question Answering (Poster)
Efficient Speech Translation with Pre-trained models (Poster)
Parameter and Data Efficient Continual Pre-training for Robustness to Dialectal Variance in Arabic (Poster)
Pyramid Dynamic Inference: Encouraging Faster Inference via Early Exit Boosting (Poster)
Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement (Poster)
PEST: Combining Parameter-Efficient Fine-Tuning with Self-Training and Co-Training (Poster)
An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks (Poster)
Pre-Training a Graph Recurrent Network for Language Representation (Poster)
QuaLA-MiniLM: a Quantized Length Adaptive MiniLM (Poster)
SymbolicGPT: A Generative Transformer Model for Symbolic Regression (Poster)
Using Selective Masking as a Bridge between Pre-training and Fine-tuning (Poster)
Strategies for Applying Low Rank Decomposition to Transformer-Based Models (Poster)