

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

Wenhui Wang ⋅ Furu Wei ⋅ Li Dong ⋅ Hangbo Bao ⋅ Nan Yang ⋅ Ming Zhou
2020 Poster
