Poster

Learning Optimal Lattice Vector Quantizers for End-to-end DNN Image Compression

Xi Zhang · Xiaolin Wu

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

It is customary to deploy uniform scalar quantization in end-to-end optimized DNN image compression methods, rather than the more powerful vector quantization, because of the high complexity of the latter. Lattice vector quantization (LVQ), however, presents a compelling alternative: it can exploit inter-feature dependencies more effectively while remaining almost as computationally efficient as scalar quantization. Traditional LVQ structures, though, are designed and optimized for uniform source distributions, and are therefore nonadaptive and suboptimal for the real source distributions of the latent code space in DNN image compression tasks. In this paper, we propose a novel learning method that overcomes this weakness by designing rate-distortion optimal LVQ codebooks with respect to the sample statistics of the latent features to be compressed. By fitting the LVQ structure to any given latent sample distribution, the proposed method significantly improves the rate-distortion performance of existing quantization schemes in DNN signal compression, while retaining the amenability of uniform scalar quantization.
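To make the core idea concrete, the sketch below shows generic lattice vector quantization: a point is mapped to a nearby lattice point by rounding its coordinates in the lattice basis, which costs little more than scalar rounding. This is a minimal illustration, not the authors' method; the generator matrix here is a fixed hexagonal (A2) basis chosen for the example, whereas the paper learns the lattice structure from the latent-feature statistics.

```python
import numpy as np

# Hypothetical 2-D generator matrix. In the paper's setting this basis
# would be learned for rate-distortion optimality; here we hard-code the
# hexagonal (A2) lattice purely as an illustration.
B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3.0) / 2.0]])

def lvq_quantize(x, basis):
    """Snap x to a nearby lattice point by rounding in the lattice basis.

    Rounding the basis coordinates is a fast approximation to exact
    nearest-lattice-point search; its cost is close to that of uniform
    scalar quantization, which is the practical appeal of LVQ.
    """
    coords = np.linalg.solve(basis, x)   # express x in lattice coordinates
    return basis @ np.round(coords)      # snap coordinates to integers

x = np.array([0.9, 1.4])
q = lvq_quantize(x, B)                   # q is a point of the lattice spanned by B
```

With the identity basis, this reduces exactly to uniform scalar quantization (component-wise rounding), which is why a non-orthogonal, distribution-matched basis is what gives LVQ its advantage.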
