
 
Poster
Gradient-EM Bayesian Meta-Learning
Yayi Zou · Xiaoqi Lu

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1204

Bayesian meta-learning enables robust and fast adaptation to new tasks with uncertainty assessment. The key idea behind Bayesian meta-learning is empirical Bayes inference in a hierarchical model. In this work, we extend this framework to include a variety of existing methods, before proposing our variant based on the gradient-EM algorithm. Our method improves computational efficiency by avoiding back-propagation through the inner loop in the meta-update step, which is expensive for deep neural networks. Furthermore, it provides flexibility in the inner-update optimization procedure by decoupling it from the meta-update. Experiments on sinusoidal regression, few-shot image classification, and policy-based reinforcement learning show that our method not only achieves better accuracy at lower computational cost, but is also more robust in its uncertainty assessment.
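The core computational claim above (a meta-update that needs no back-propagation through the inner adaptation loop) can be illustrated with a minimal first-order sketch. This is not the authors' exact algorithm: the toy task distribution, quadratic per-task loss, and learning rates are all assumptions for illustration. The inner loop plays the role of the E-step (adapting to each task from the prior mean), and the meta-update plays the role of the M-step (moving the prior mean toward the average adapted solution by a simple gradient step, treating the adapted parameters as fixed).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5
# Hypothetical task distribution: each task t is defined by a target w_t,
# with per-task loss L_t(phi) = ||phi - w_t||^2.
task_means = rng.normal(size=(8, dim))

theta = np.zeros(dim)          # meta-parameters (prior mean over task parameters)
inner_lr, meta_lr = 0.1, 0.5

def inner_adapt(theta, w_t, steps=20):
    # E-step analogue: adapt to task t by plain gradient descent on L_t,
    # initialized at the prior mean. Any inner optimizer could be used here,
    # since the meta-update below never differentiates through these steps.
    phi = theta.copy()
    for _ in range(steps):
        grad = 2.0 * (phi - w_t)
        phi -= inner_lr * grad
    return phi

for _ in range(100):  # meta-iterations
    adapted = np.stack([inner_adapt(theta, w) for w in task_means])
    # M-step analogue: move the prior mean toward the average adapted
    # solution. This is a first-order update on the adapted parameters,
    # so no second-order back-propagation is required.
    theta += meta_lr * (adapted.mean(axis=0) - theta)

# The meta-parameters converge to the mean of the task solutions.
print(np.allclose(theta, task_means.mean(axis=0), atol=1e-2))
```

Because the meta-update only consumes the adapted parameters (not the computation graph that produced them), the inner-loop optimizer is fully decoupled from the meta-update, which is the flexibility the abstract refers to.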

Author Information

Yayi Zou (Didichuxing AI Labs @ Mountain View)

Research Scientist, Didichuxing AI Labs @ Mountain View
Data Scientist, WalmartLabs
Ph.D. student, ORIE, Cornell
Undergraduate, EECS, Peking University
Shenzhen Middle School

Xiaoqi Lu (Columbia University)