Poster
On the Representation Collapse of Sparse Mixture of Experts
Zewen Chi · Li Dong · Shaohan Huang · Damai Dai · Shuming Ma · Barun Patra · Saksham Singhal · Payal Bajaj · Xia Song · Xian-Ling Mao · Heyan Huang · Furu Wei


Sparse mixture of experts provides larger model capacity while requiring a constant computational overhead. It employs a routing mechanism to distribute input tokens to the best-matched experts according to their hidden representations. However, learning such a routing mechanism encourages token clustering around expert centroids, implying a trend toward representation collapse. In this work, we propose to estimate the routing scores between tokens and experts on a low-dimensional hypersphere. We conduct extensive experiments on cross-lingual language model pre-training and fine-tuning on downstream tasks. Experimental results across seven multilingual benchmarks show that our method achieves consistent gains. We also present a comprehensive analysis of the representation and routing behaviors of our models. Our method alleviates the representation collapse issue and achieves more consistent routing than baseline mixture-of-experts methods.
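The abstract describes scoring tokens against experts on a low-dimensional hypersphere rather than in the full hidden space. Below is a minimal, illustrative sketch of such a router; the routing dimension, the learnable temperature parameterization, and all class and parameter names are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HypersphereRouter(nn.Module):
    """Illustrative router: tokens and expert embeddings are projected to a
    low-dimensional space, L2-normalized onto the unit hypersphere, and
    scored by scaled cosine similarity."""

    def __init__(self, hidden_dim: int, num_experts: int, routing_dim: int = 16):
        super().__init__()
        # Project token representations into a low-dimensional routing space.
        self.proj = nn.Linear(hidden_dim, routing_dim, bias=False)
        # One learnable embedding (centroid) per expert in the same space.
        self.expert_embed = nn.Parameter(torch.randn(num_experts, routing_dim))
        # Learnable temperature that scales the cosine similarities.
        self.log_temp = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (num_tokens, hidden_dim)
        tokens = F.normalize(self.proj(hidden), dim=-1)       # unit hypersphere
        experts = F.normalize(self.expert_embed, dim=-1)      # unit hypersphere
        scores = tokens @ experts.t() / self.log_temp.exp()   # scaled cosine similarity
        return scores.softmax(dim=-1)                         # routing probabilities
```

Each token could then be dispatched to its highest-scoring expert, e.g. via `scores.argmax(dim=-1)`; expert capacity and load-balancing logic are omitted from this sketch.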

Author Information

Zewen Chi (Beijing Institute of Technology)
Li Dong (Microsoft Research)
Shaohan Huang (Microsoft)
Damai Dai (Peking University)
Shuming Ma (Peking University)
Barun Patra (Microsoft)
Saksham Singhal (Microsoft)
Payal Bajaj (Microsoft)
Xia Song (Microsoft)
Xian-Ling Mao (Beijing Institute of Technology)
Heyan Huang (Beijing Institute of Technology)
Furu Wei (Microsoft Research Asia)
