Learning Large-scale Neural Fields via Context Pruned Meta-Learning

Jihoon Tack · Subin Kim · Sihyun Yu · Jaeho Lee · Jinwoo Shin · Jonathan Richard Schwarz

Great Hall & Hall B1+B2 (level 1) #1017
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST


We introduce an efficient optimization-based meta-learning technique for large-scale neural field training that realizes significant memory savings through automated online context point selection. This is achieved by focusing each learning step on the subset of data with the highest expected immediate improvement in model quality, resulting in almost instantaneous modeling of global structure followed by refinement of high-frequency details. We further improve the quality of our meta-learned initialization by introducing a bootstrap correction that minimizes the error introduced by reduced context sets while simultaneously mitigating the well-known myopia of optimization-based meta-learning. Finally, we show how gradient re-scaling at meta-test time enables the learning of extremely high-quality neural fields with significantly shortened optimization procedures. Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals. We provide an extensive empirical evaluation on nine datasets across multiple modalities, demonstrating state-of-the-art results while providing additional insight through careful analysis of the algorithmic components constituting our method. Code is available at
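To make the selection idea concrete, the following PyTorch sketch shows one plausible reading of online context pruning: at each inner-loop step, every context point is scored by its current per-point reconstruction error, used here as a stand-in proxy for the abstract's "expected immediate improvement" criterion, and the gradient step is taken on only the top-scoring subset. The memory saving comes from backpropagating through a fraction of the context set, while the scoring pass is forward-only and comparatively cheap. The model CoordMLP, the function pruned_adaptation, and the hyper-parameters keep_frac and lr are illustrative assumptions, not the authors' implementation; the bootstrap correction and meta-test gradient re-scaling are not reproduced here.

import torch
import torch.nn as nn


class CoordMLP(nn.Module):
    """Small coordinate network mapping 2-D coordinates to RGB values.
    (Hypothetical stand-in for whatever neural field architecture is used.)"""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x):
        return self.net(x)


def pruned_adaptation(model, coords, targets, steps=4, lr=1e-2, keep_frac=0.25):
    """Inner-loop adaptation on a pruned context set (illustrative sketch).

    At every step, context points are scored by per-point reconstruction
    error (an assumed proxy for expected immediate improvement), and the
    gradient step is computed on the top `keep_frac` fraction only, so the
    backward pass touches a subset of the data.
    """
    for _ in range(steps):
        with torch.no_grad():  # forward-only scoring pass, no graph stored
            scores = ((model(coords) - targets) ** 2).mean(dim=-1)
            k = max(1, int(keep_frac * coords.shape[0]))
            idx = scores.topk(k).indices  # keep highest-error context points

        loss = ((model(coords[idx]) - targets[idx]) ** 2).mean()
        model.zero_grad()
        loss.backward()
        with torch.no_grad():  # plain SGD step on the pruned subset
            for p in model.parameters():
                p -= lr * p.grad
    return model


# Toy usage: fit a random "image" given as (coordinate, colour) pairs.
model = CoordMLP()
coords = torch.rand(4096, 2)   # normalised pixel coordinates
targets = torch.rand(4096, 3)  # RGB targets
pruned_adaptation(model, coords, targets)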
