Poster

Training Binary Neural Networks via Gaussian Variational Inference and Low-Rank Semidefinite Programming

Lorenzo Orecchia · Jiawei Hu · Xue He · Wang Mark · Xulei Yang · Min Wu · Xue Geng

East Exhibit Hall A-C #1906
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Current methods for training Binarized Neural Networks (BNNs) rely heavily on the heuristic straight-through estimator (STE), which crucially enables the application of SGD-based optimizers to the combinatorial training problem. Although the STE heuristic and its variants have led to significant improvements in BNN performance, their theoretical underpinnings remain unclear and relatively understudied. In this paper, we propose a theoretically motivated optimization framework for BNN training based on Gaussian variational inference. In its simplest form, our approach yields a non-convex linear programming formulation whose variables and associated gradients motivate the use of latent weights and STE gradients. More importantly, our framework allows us to formulate semidefinite programming (SDP) relaxations of the BNN training task. Such formulations explicitly model pairwise correlations between weights during training, leading to a more accurate characterization of the training problem. Because the size of such formulations grows quadratically in the number of weights, quickly becoming intractable for large networks, we apply the Burer-Monteiro approach and optimize only over linear-size low-rank SDP solutions. Our empirical evaluation on the CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet datasets shows that our method consistently outperforms all state-of-the-art algorithms for training BNNs.
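For context on the baseline the abstract refers to, the sketch below shows the standard latent-weight/STE mechanism in PyTorch: real-valued latent weights are binarized with sign() on the forward pass, and gradients are passed straight through (here with the common clipping variant). This is a minimal illustration of the generic STE heuristic, not the paper's proposed method; all names are illustrative.

```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize with sign() on the forward pass; pass gradients
    straight through on the backward pass (clipped STE variant)."""

    @staticmethod
    def forward(ctx, latent):
        ctx.save_for_backward(latent)
        return torch.sign(latent)

    @staticmethod
    def backward(ctx, grad_output):
        (latent,) = ctx.saved_tensors
        # Gradient is passed through only where the latent weight
        # lies in [-1, 1]; elsewhere it is zeroed.
        return grad_output * (latent.abs() <= 1).to(grad_output.dtype)

# Usage: keep real-valued latent weights, binarize on every forward pass.
latent_w = torch.randn(128, 64, requires_grad=True)
binary_w = SignSTE.apply(latent_w)
loss = binary_w.sum()          # stand-in for a real training loss
loss.backward()                # gradients reach latent_w via the STE
```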

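The Burer-Monteiro idea mentioned in the abstract can be illustrated generically: replace the n x n PSD variable X = V V^T with an n x k factor V whose rows lie on the unit sphere (so diag(X) = 1, matching binary weights w_i in {-1, +1}), and run projected gradient descent on V. The sketch below does this for a hypothetical symmetric objective matrix C standing in for the paper's actual training objective; the final hyperplane rounding is a standard Goemans-Williamson-style heuristic, not necessarily the paper's rounding procedure.

```python
import torch

def burer_monteiro_step(V, C, lr=0.1):
    """One projected-gradient step on the low-rank SDP surrogate
        min_V  <C, V V^T>   s.t.  ||V_i||_2 = 1 for every row i,
    which stores an n x k factor instead of the full n x n matrix,
    so memory is linear in n for a fixed rank k."""
    V = V.detach().requires_grad_(True)
    objective = torch.sum((C @ V) * V)   # <C, V V^T> = tr(V^T C V)
    objective.backward()
    with torch.no_grad():
        V = V - lr * V.grad
        V = V / V.norm(dim=1, keepdim=True)  # project rows back to the sphere
    return V

n, k = 256, 8                      # number of weights, chosen SDP rank
C = torch.randn(n, n)
C = (C + C.T) / 2                  # hypothetical symmetric objective matrix
V = torch.randn(n, k)
V = V / V.norm(dim=1, keepdim=True)
for _ in range(100):
    V = burer_monteiro_step(V, C)

# Round the low-rank solution to binary weights via a random hyperplane.
r = torch.randn(k)
binary_w = torch.sign(V @ r)
```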