Poster
R-Drop: Regularized Dropout for Neural Networks
Xiaobo Liang · Lijun Wu · Juntao Li · Yue Wang · Qi Meng · Tao Qin · Wei Chen · Min Zhang · Tie-Yan Liu
Dropout is a powerful and widely used technique to regularize the training of deep neural networks. Though effective and performing well, dropout introduces randomness that causes a non-negligible inconsistency between training and inference. In this paper, we introduce a simple consistency training strategy to regularize dropout, namely R-Drop, which forces the output distributions of different sub-models generated by dropout to be consistent with each other. Specifically, for each training sample, R-Drop minimizes the bidirectional KL-divergence between the output distributions of two sub-models sampled by dropout. Theoretical analysis reveals that R-Drop reduces the above inconsistency. Experiments on $\bf{5}$ widely used deep learning tasks ($\bf{18}$ datasets in total), including neural machine translation, abstractive summarization, language understanding, language modeling, and image classification, show that R-Drop is universally effective. In particular, it yields substantial improvements when applied to fine-tune large-scale pre-trained models, e.g., ViT, RoBERTa-large, and BART, and achieves state-of-the-art (SOTA) performance with the vanilla Transformer model on WMT14 English$\to$German translation ($\bf{30.91}$ BLEU) and WMT14 English$\to$French translation ($\bf{43.95}$ BLEU), even surpassing models trained with extra large-scale data and expert-designed advanced variants of Transformer models. Our code is available at GitHub\footnote{\url{https://github.com/dropreg/R-Drop}}.
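As a rough illustration of the objective described in the abstract (not the authors' exact implementation; see the linked repository for that), the sketch below shows the R-Drop loss for a classification setting in PyTorch. The names `model`, `x`, `y`, and the KL weight `alpha` are placeholders introduced here for illustration.

```python
import torch.nn.functional as F

def r_drop_loss(model, x, y, alpha=1.0):
    """Cross-entropy plus bidirectional KL between two dropout-perturbed passes."""
    # Two forward passes through the same model; with dropout enabled
    # (model.train()), each pass corresponds to a different sampled sub-model.
    logits1 = model(x)
    logits2 = model(x)

    # Ordinary cross-entropy, averaged over the two passes.
    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))

    # Bidirectional (symmetric) KL divergence between the two output distributions.
    log_p = F.log_softmax(logits1, dim=-1)
    log_q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (
        F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
        + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)
    )

    return ce + alpha * kl
```

In a training loop one would call this with the model in training mode so that dropout is stochastic; the weight on the KL term is tuned per task in the paper.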
Author Information
Xiaobo Liang (Soochow University, China)
Lijun Wu (Sun Yat-sen University)
Juntao Li (Soochow University, China)
Yue Wang (Soochow University, China)
Qi Meng (Microsoft)
Tao Qin (Microsoft Research)
Wei Chen (Chinese Academy of Sciences)
Min Zhang
Tie-Yan Liu (Microsoft Research Asia)
More from the Same Authors
- 2021 : AI X Science »
  Tie-Yan Liu
- 2021 Poster: On the Generative Utility of Cyclic Conditionals »
  Chang Liu · Haoyue Tang · Tao Qin · Jintao Wang · Tie-Yan Liu
- 2021 Poster: Curriculum Offline Imitating Learning »
  Minghuan Liu · Hanye Zhao · Zhengyu Yang · Jian Shen · Weinan Zhang · Li Zhao · Tie-Yan Liu
- 2021 Poster: Speech-T: Transducer for Text to Speech and Beyond »
  Jiawei Chen · Xu Tan · Yichong Leng · Jin Xu · Guihua Wen · Tao Qin · Tie-Yan Liu
- 2021 Poster: Stylized Dialogue Generation with Multi-Pass Dual Learning »
  Jinpeng Li · Yingce Xia · Rui Yan · Hongda Sun · Dongyan Zhao · Tie-Yan Liu
- 2021 Poster: Distributional Reinforcement Learning for Multi-Dimensional Reward Functions »
  Pushi Zhang · Xiaoyu Chen · Li Zhao · Wei Xiong · Tao Qin · Tie-Yan Liu
- 2021 Poster: Optimizing Information-theoretical Generalization Bound via Anisotropic Noise of SGLD »
  Bohan Wang · Huishuai Zhang · Jieyu Zhang · Qi Meng · Wei Chen · Tie-Yan Liu
- 2021 Poster: Co-evolution Transformer for Protein Contact Prediction »
  He Zhang · Fusong Ju · Jianwei Zhu · Liang He · Bin Shao · Nanning Zheng · Tie-Yan Liu
- 2021 Poster: Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding »
  Shengjie Luo · Shanda Li · Tianle Cai · Di He · Dinglan Peng · Shuxin Zheng · Guolin Ke · Liwei Wang · Tie-Yan Liu
- 2021 Poster: Learning Causal Semantic Representation for Out-of-Distribution Prediction »
  Chang Liu · Xinwei Sun · Jindong Wang · Haoyue Tang · Tao Li · Tao Qin · Wei Chen · Tie-Yan Liu
- 2021 Poster: Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning »
  Jongjin Park · Younggyo Seo · Chang Liu · Li Zhao · Tao Qin · Jinwoo Shin · Tie-Yan Liu
- 2021 Poster: FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition »
  Yichong Leng · Xu Tan · Linchen Zhu · Jin Xu · Renqian Luo · Linquan Liu · Tao Qin · Xiangyang Li · Edward Lin · Tie-Yan Liu
- 2021 Poster: Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering »
  Weijiang Yu · Haoteng Zheng · Mengfei Li · Lei Ji · Lijun Wu · Nong Xiao · Nan Duan
- 2021 Poster: Do Transformers Really Perform Badly for Graph Representation? »
  Chengxuan Ying · Tianle Cai · Shengjie Luo · Shuxin Zheng · Guolin Ke · Di He · Yanming Shen · Tie-Yan Liu
- 2021 Poster: Recovering Latent Causal Factor for Generalization to Distributional Shifts »
  Xinwei Sun · Botong Wu · Xiangyu Zheng · Chang Liu · Wei Chen · Tao Qin · Tie-Yan Liu
- 2020 Poster: Semi-Supervised Neural Architecture Search »
  Renqian Luo · Xu Tan · Rui Wang · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2020 Poster: Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection »
  Xiang Li · Wenhai Wang · Lijun Wu · Shuo Chen · Xiaolin Hu · Jun Li · Jinhui Tang · Jian Yang
- 2020 Poster: RD$^2$: Reward Decomposition with Representation Decomposition »
  Zichuan Lin · Derek Yang · Li Zhao · Tao Qin · Guangwen Yang · Tie-Yan Liu
- 2020 Poster: MPNet: Masked and Permuted Pre-training for Language Understanding »
  Kaitao Song · Xu Tan · Tao Qin · Jianfeng Lu · Tie-Yan Liu
- 2019 Poster: Neural Machine Translation with Soft Prototype »
  Yiren Wang · Yingce Xia · Fei Tian · Fei Gao · Tao Qin · Cheng Xiang Zhai · Tie-Yan Liu
- 2019 Poster: FastSpeech: Fast, Robust and Controllable Text to Speech »
  Yi Ren · Yangjun Ruan · Xu Tan · Tao Qin · Sheng Zhao · Zhou Zhao · Tie-Yan Liu
- 2019 Poster: Fully Parameterized Quantile Function for Distributional Reinforcement Learning »
  Derek Yang · Li Zhao · Zichuan Lin · Tao Qin · Jiang Bian · Tie-Yan Liu
- 2019 Poster: Distributional Reward Decomposition for Reinforcement Learning »
  Zichuan Lin · Li Zhao · Derek Yang · Tao Qin · Tie-Yan Liu · Guangwen Yang
- 2019 Poster: Normalization Helps Training of Quantized LSTM »
  Lu Hou · Jinhua Zhu · James Kwok · Fei Gao · Tao Qin · Tie-Yan Liu
- 2018 Poster: Neural Architecture Optimization »
  Renqian Luo · Fei Tian · Tao Qin · Enhong Chen · Tie-Yan Liu
- 2018 Poster: Learning to Teach with Dynamic Loss Functions »
  Lijun Wu · Fei Tian · Yingce Xia · Yang Fan · Tao Qin · Lai Jian-Huang · Tie-Yan Liu
- 2018 Poster: Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation »
  Tianyu He · Xu Tan · Yingce Xia · Di He · Tao Qin · Zhibo Chen · Tie-Yan Liu
- 2018 Poster: FRAGE: Frequency-Agnostic Word Representation »
  Chengyue Gong · Di He · Xu Tan · Tao Qin · Liwei Wang · Tie-Yan Liu
- 2017 Poster: Decoding with Value Networks for Neural Machine Translation »
  Di He · Hanqing Lu · Yingce Xia · Tao Qin · Liwei Wang · Tie-Yan Liu
- 2017 Poster: Finite sample analysis of the GTD Policy Evaluation Algorithms in Markov Setting »
  Yue Wang · Wei Chen · Yuting Liu · Zhi-Ming Ma · Tie-Yan Liu
- 2017 Poster: Deliberation Networks: Sequence Generation Beyond One-Pass Decoding »
  Yingce Xia · Fei Tian · Lijun Wu · Jianxin Lin · Tao Qin · Nenghai Yu · Tie-Yan Liu
- 2017 Poster: LightGBM: A Highly Efficient Gradient Boosting Decision Tree »
  Guolin Ke · Qi Meng · Thomas Finley · Taifeng Wang · Wei Chen · Weidong Ma · Qiwei Ye · Tie-Yan Liu
- 2016 Poster: Dual Learning for Machine Translation »
  Di He · Yingce Xia · Tao Qin · Liwei Wang · Nenghai Yu · Tie-Yan Liu · Wei-Ying Ma
- 2016 Poster: LightRNN: Memory and Computation-Efficient Recurrent Neural Networks »
  Xiang Li · Tao Qin · Jian Yang · Xiaolin Hu · Tie-Yan Liu
- 2013 Poster: Estimation Bias in Multi-Armed Bandit Algorithms for Search Advertising »
  Min Xu · Tao Qin · Tie-Yan Liu
- 2010 Workshop: Machine Learning in Online Advertising »
  James G Shanahan · Deepak Agarwal · Tao Qin · Tie-Yan Liu
- 2010 Poster: A New Probabilistic Model for Rank Aggregation »
  Tao Qin · Xiubo Geng · Tie-Yan Liu
- 2008 Poster: Global Ranking Using Continuous Conditional Random Fields »
  Tao Qin · Tie-Yan Liu · Xu-Dong Zhang · De-Sheng Wang · Hang Li
- 2008 Oral: Global Ranking Using Continuous Conditional Random Fields »
  Tao Qin · Tie-Yan Liu · Xu-Dong Zhang · De-Sheng Wang · Hang Li