Poster
Mixture-of-Experts with Expert Choice Routing
Yanqi Zhou · Tao Lei · Hanxiao Liu · Nan Du · Yanping Huang · Vincent Zhao · Andrew Dai · Zhifeng Chen · Quoc V Le · James Laudon

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #732

Sparsely activated Mixture-of-Experts (MoE) models allow the number of parameters to grow substantially while keeping the amount of computation for a given token or sample unchanged. However, a poor expert routing strategy (e.g., one resulting in load imbalance) can cause certain experts to be under-trained, leaving them under- or over-specialized. Prior work allocates a fixed number of experts to each token using a top-k function, regardless of the relative importance of different tokens. To address this, we propose a heterogeneous mixture-of-experts employing an expert choice method. Instead of letting tokens select the top-k experts, we let experts select the top-k tokens. As a result, each token can be routed to a variable number of experts while each expert keeps a fixed bucket size. We systematically study pre-training speedups under the same computational resources as the Switch Transformer top-1 and GShard top-2 gating of prior work, and find that our method improves training convergence time by more than 2×. At the same computational cost, our method demonstrates higher performance when fine-tuning on 11 selected tasks from the GLUE and SuperGLUE benchmarks. At a smaller activation cost, our method outperforms the T5 dense model on 7 of the 11 tasks.
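To make the routing step concrete, below is a minimal sketch of expert-choice routing in JAX. It is not the authors' implementation; the function name expert_choice_routing, the router weights w_router, and the capacity_factor parameter are illustrative assumptions. The sketch shows the key inversion: token-to-expert affinity scores are transposed so that each expert picks its own top-k tokens, giving every expert a fixed bucket size.

import jax

def expert_choice_routing(tokens, w_router, capacity_factor=2.0):
    # Each expert selects its top-k tokens, rather than each token selecting experts.
    # tokens: [n, d], w_router: [d, e] (hypothetical shapes for illustration).
    n, _ = tokens.shape
    e = w_router.shape[1]                       # number of experts
    k = int(n * capacity_factor / e)            # fixed bucket size per expert

    scores = jax.nn.softmax(tokens @ w_router, axis=-1)  # [n, e] token-to-expert affinities
    # Transpose so each expert row chooses its own top-k tokens.
    gating, indices = jax.lax.top_k(scores.T, k)         # both [e, k]
    # gating[i, j] is the weight of the j-th token chosen by expert i;
    # indices[i, j] is that token's position in the batch.
    return gating, indices

# Usage: route 8 tokens of width 4 across 2 experts (bucket size = 8 here).
tokens = jax.random.normal(jax.random.PRNGKey(0), (8, 4))
w_router = jax.random.normal(jax.random.PRNGKey(1), (4, 2))
gating, indices = expert_choice_routing(tokens, w_router)
print(gating.shape, indices.shape)  # (2, 8) (2, 8)

Because the top-k is taken per expert rather than per token, no expert's bucket can be overloaded, and tokens the router scores as more important can land in several experts' buckets.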

Author Information

Yanqi Zhou (Google Brain)
Tao Lei (MIT)
Hanxiao Liu (Google Brain)
Nan Du (Google Brain)
Yanping Huang (Google Brain)
Vincent Zhao (Google Research, Brain)
Andrew Dai (Google)
Zhifeng Chen (Google Brain)
Quoc V Le (Google)
James Laudon (Google)
