Poster
Learning Expressive Meta-Representations with Mixture of Expert Neural Processes
Qi Wang · Herke van Hoof


Neural processes (NPs) formulate exchangeable stochastic processes and are promising models for meta-learning that do not require gradient updates at test time. However, most NP variants rely heavily on a single global latent variable, which weakens their approximation power and restricts their range of applications, especially when the data-generating process is complicated. To resolve these issues, we propose combining Mixture of Expert models with Neural Processes to develop more expressive exchangeable stochastic processes, referred to as Mixture of Expert Neural Processes (MoE-NPs). We then apply MoE-NPs to both few-shot supervised learning and meta reinforcement learning tasks. Empirical results demonstrate MoE-NPs' strong generalization capability to unseen tasks in these benchmarks.
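The abstract describes combining a per-task gating mechanism over several expert encoders with the usual NP encode-aggregate-decode pipeline. The sketch below is an illustrative, untrained forward pass of that idea only, not the authors' implementation: all network sizes, the mean-pooling aggregation, the `moe_np_predict` name, and the deterministic (non-latent) representation are assumptions for illustration; latent sampling and training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hid, d_out):
    """Randomly initialized one-hidden-layer MLP parameters (illustrative only)."""
    return (rng.normal(0, 0.5, (d_in, d_hid)), np.zeros(d_hid),
            rng.normal(0, 0.5, (d_hid, d_out)), np.zeros(d_out))

def mlp(params, x):
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical sizes: K experts, scalar inputs/outputs, representation dim d_r.
K, d_x, d_y, d_r = 3, 1, 1, 8
experts = [init_mlp(d_x + d_y, 16, d_r) for _ in range(K)]  # per-expert encoders
gate    = init_mlp(d_r, 16, K)                              # gating network over experts
decoder = init_mlp(d_r + d_x, 16, d_y)                      # shared decoder

def moe_np_predict(x_ctx, y_ctx, x_tgt):
    """Untrained MoE-NP-style pass: encode context per expert, gate, decode mixture."""
    ctx = np.concatenate([x_ctx, y_ctx], axis=-1)
    # Each expert aggregates the context set with a permutation-invariant mean,
    # preserving the exchangeability property NPs require.
    reps = [mlp(e, ctx).mean(axis=0) for e in experts]       # K vectors of size d_r
    logits = mlp(gate, np.mean(reps, axis=0))                # task-level gate logits
    w = np.exp(logits - logits.max()); w /= w.sum()          # softmax expert weights
    # Decode the targets under each expert, then mix the predictions.
    preds = np.stack([mlp(decoder, np.concatenate(
        [np.tile(r, (len(x_tgt), 1)), x_tgt], axis=-1)) for r in reps])
    return np.tensordot(w, preds, axes=1)                    # shape (n_tgt, d_y)

x_c = rng.uniform(-1, 1, (5, 1)); y_c = np.sin(3 * x_c)     # toy context set
x_t = np.linspace(-1, 1, 10).reshape(-1, 1)                 # target inputs
mu = moe_np_predict(x_c, y_c, x_t)
print(mu.shape)  # → (10, 1)
```

Because the expert representations are set means, predictions are invariant to the ordering of context points, matching the exchangeable-stochastic-process framing in the abstract.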

Author Information

Qi Wang (Amsterdam Machine Learning Lab)

I am a Ph.D. student with Dr. Herke van Hoof and Prof. Max Welling.

Herke van Hoof (University of Amsterdam)
