

Poster

B$\oplus$LD: Boolean Logic Deep Learning

Van Minh NGUYEN · Cristian Ocampo-Blandon · Aymen Askri · Louis Leconte · Ba-Hien Tran

East Exhibit Hall A-C #1901
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The computational intensiveness of deep learning has motivated low-precision arithmetic designs. However, current quantized/binarized training approaches are limited by: (1) significant performance loss due to arbitrary approximations of the latent weight gradient through its discretization/binarization function, and (2) the computational cost of training due to the reliance on full-precision latent weights. This paper proposes a novel mathematical principle by introducing the notion of Boolean variation, such that neurons made of Boolean weights and/or activations can be trained, for the first time, natively in the Boolean domain instead of by latent-weight gradient descent and real arithmetic. We explore its convergence, conduct extensive experimental benchmarking, and provide a consistent complexity evaluation that accounts for chip architecture, memory hierarchy, dataflow, and arithmetic precision. Our approach matches baseline full-precision accuracy on ImageNet classification and surpasses state-of-the-art results in semantic segmentation, with notable performance in image super-resolution and in natural language understanding with transformer-based models. Moreover, it significantly reduces energy consumption during both training and inference.
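To make the abstract's core idea concrete, the sketch below illustrates what "training natively in the Boolean domain, with no full-precision latent weights" can look like in the simplest possible setting. It is a toy illustration under stated assumptions, not the paper's actual Boolean-variation calculus: the XNOR-plus-majority neuron, the `flip_signal` rule, the integer accumulator, and the flip threshold are all invented here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def xnor_neuron(w, x):
    """Boolean neuron: XNOR each weight with its input, fire on majority agreement."""
    agree = ~(w ^ x)               # XNOR: True where weight and input match
    return agree.sum() * 2 > len(w)

def flip_signal(w, x, y_true):
    """Per-weight flip signal on one sample (a toy stand-in for a Boolean
    'variation'): 0 if the neuron is already correct; otherwise +1 for weights
    whose flip moves the agreement count toward y_true, -1 otherwise."""
    if xnor_neuron(w, x) == y_true:
        return np.zeros(len(w), dtype=int)
    agree = ~(w ^ x)
    # Flipping w[i] toggles agree[i], so flipping the bits that currently
    # disagree with the desired output pushes the majority the right way.
    return np.where(agree != y_true, 1, -1)

def train_step(w, batch_x, batch_y, threshold=2):
    """Accumulate integer flip signals over a mini-batch, then flip every weight
    whose accumulated evidence reaches the threshold. There are no real-valued
    latent weights: the state is Boolean and the accumulators are integers."""
    acc = np.zeros(len(w), dtype=int)
    for x, y in zip(batch_x, batch_y):
        acc += flip_signal(w, x, y)
    return w ^ (acc >= threshold)  # XOR flips the selected weights

# Toy usage: learn to imitate a hidden Boolean teacher neuron from its labels.
n = 16
w_teacher = rng.random(n) < 0.5
w = rng.random(n) < 0.5
for _ in range(100):
    batch_x = rng.random((32, n)) < 0.5
    batch_y = np.array([xnor_neuron(w_teacher, x) for x in batch_x])
    w = train_step(w, batch_x, batch_y)
test_x = rng.random((500, n)) < 0.5
agreement = np.mean([xnor_neuron(w, x) == xnor_neuron(w_teacher, x) for x in test_x])
print(f"agreement with teacher neuron: {agreement:.2f}")
```

In this sketch the integer accumulator plays the role that a full-precision latent weight plays in straight-through binarized training, but the learnable state itself stays Boolean and every update is a bit flip driven by discrete evidence rather than a real-valued gradient step.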
