
Efficient Learning of Generative Models via Finite-Difference Score Matching
Tianyu Pang · Kun Xu · Chongxuan LI · Yang Song · Stefano Ermon · Jun Zhu

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1362

Several machine learning applications involve optimizing higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive in memory and computation even with automatic differentiation. As a typical example in generative modeling, score matching (SM) involves the optimization of the trace of a Hessian. To improve computational efficiency, we rewrite the SM objective and its variants in terms of directional derivatives, and present a generic strategy to efficiently approximate any-order directional derivative with finite differences (FD). Our approximation involves only function evaluations, which can be executed in parallel, and no gradient computations. It thus reduces the total computational cost while also improving numerical stability. We provide two instantiations by reformulating variants of SM objectives into FD forms. Empirically, we demonstrate that our methods produce results comparable to the gradient-based counterparts while being much more computationally efficient.
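The core idea in the abstract — replacing higher-order derivatives with finite differences of function evaluations — can be illustrated with the standard central-difference scheme for a second-order directional derivative. The sketch below is illustrative only (it is not the paper's code); the function names and the quadratic test function are invented for the example. For a scalar function f, the quantity v^T H_f(x) v (a directional second derivative, the kind of term a Hessian-trace objective requires) is approximated using three evaluations of f, with no gradient calls:

```python
import numpy as np

def fd_directional_second(f, x, v, eps=1e-3):
    """Central finite-difference estimate of v^T H_f(x) v.

    Uses only three function evaluations, which are independent
    and could be executed in parallel, as the abstract notes.
    """
    return (f(x + eps * v) - 2.0 * f(x) + f(x - eps * v)) / eps**2

# Sanity check on a quadratic f(x) = 0.5 x^T A x, whose Hessian is exactly A,
# so v^T H v = v^T A v and the central difference is exact up to round-off.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
x = np.array([0.3, -0.7])
v = np.array([1.0, 2.0])

exact = v @ A @ v            # 8.0 for this A and v
approx = fd_directional_second(f, x, v)
```

Projecting the Hessian onto random directions v and averaging such estimates is the usual way a trace term is handled stochastically; the paper's contribution is reformulating SM objectives so that these directional quantities, rather than full gradients of gradients, are what needs to be computed.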

Author Information

Tianyu Pang (Tsinghua University)
Kun Xu (Tsinghua University)
Chongxuan LI (Tsinghua University)

Assistant Professor @ RUC

Yang Song (Stanford University)
Stefano Ermon (Stanford)
Jun Zhu (Tsinghua University)
