Poster
Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
James Oldfield · Markos Georgopoulos · Grigorios Chrysos · Christos Tzelepis · Yannis Panagakis · Mihalis Nicolaou · Jiankang Deng · Ioannis Patras
East Exhibit Hall A-C #3003
The Mixture of Experts (MoE) paradigm provides a powerful way to decompose dense layers into smaller, modular computations that are often more amenable to human interpretation, debugging, and editing. However, a major challenge lies in the computational cost of scaling the number of experts large enough to achieve fine-grained specialization. In this paper, we propose the Multilinear Mixture of Experts (muMoE) layer to address this, focusing on vision models. muMoE layers enable scalable expert specialization by performing an implicit computation on prohibitively large weight tensors entirely in factorized form. Consequently, muMoEs (1) avoid the restrictively high inference-time costs of 'soft' MoEs, yet (2) do not inherit the training issues caused by the discrete (non-differentiable) expert routing of popular 'sparse' MoEs. We present both qualitative and quantitative evidence that scaling muMoE layers when fine-tuning foundation models for vision tasks leads to more specialized experts at the class level, further enabling manual bias correction in CelebA attribute classification. Finally, we show qualitative results demonstrating the expert specialization achieved when pre-training large GPT2 and MLP-Mixer models with parameter-matched muMoE blocks at every layer, while maintaining comparable accuracy. Our anonymous code and full results are included at: https://anonymous.4open.science/r/muMoE-7A43.
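To make the factorized computation concrete, below is a minimal PyTorch sketch of a CP-factorized soft-MoE layer in the spirit of the abstract's description: the (experts x d_in x d_out) weight tensor is never materialized, and the soft expert coefficients are contracted with the factor matrices directly. The class name, gating scheme, rank, and initialization here are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class CPMuMoE(nn.Module):
    """Sketch of a CP-factorized mixture-of-experts layer.

    The implicit weight tensor W in R^{n_experts x d_in x d_out} is stored
    only through its three CP factor matrices; the forward pass contracts
    the input and the (soft, differentiable) expert coefficients with the
    factors, avoiding the full tensor entirely.
    """

    def __init__(self, d_in: int, d_out: int, n_experts: int, rank: int):
        super().__init__()
        # Gating that produces soft (differentiable) expert coefficients.
        self.gate = nn.Linear(d_in, n_experts)
        # CP factors of the implicit expert weight tensor.
        self.U = nn.Parameter(torch.randn(n_experts, rank) / rank**0.5)  # expert mode
        self.V = nn.Parameter(torch.randn(d_in, rank) / d_in**0.5)       # input mode
        self.W = nn.Parameter(torch.randn(d_out, rank) / rank**0.5)      # output mode

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in)
        a = torch.softmax(self.gate(x), dim=-1)        # (batch, n_experts), soft routing
        # Per-token cost is O(rank * (n_experts + d_in + d_out)) instead of
        # the O(n_experts * d_in * d_out) needed to apply every expert densely.
        expert_mix = a @ self.U                        # (batch, rank)
        input_mix = x @ self.V                         # (batch, rank)
        return (expert_mix * input_mix) @ self.W.T     # (batch, d_out)


if __name__ == "__main__":
    layer = CPMuMoE(d_in=768, d_out=768, n_experts=256, rank=64)
    out = layer(torch.randn(4, 768))
    print(out.shape)  # torch.Size([4, 768])
```

Because the expert count only enters through the (n_experts x rank) factor and the gating head, the number of experts can be scaled up aggressively without the per-token cost growing with a full dense forward pass per expert.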