

Spotlight Poster

QTIP: Quantization with Trellises and Incoherence Processing

Albert Tseng · Qingyao Sun · David Hou · Christopher De Sa

East Exhibit Hall A-C #3407
[ Project Page ]
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Post-training quantization (PTQ) reduces the memory footprint of LLMs by quantizing weights to low-precision datatypes. Since LLM inference is usually memory-bandwidth-bound, it can be accelerated by PTQ methods. Recent state-of-the-art PTQ approaches have converged on using vector quantization (VQ) to quantize multiple weights at once, which improves information utilization through better shaping. However, VQ requires a codebook with size exponential in the dimension. This limits current VQ-based PTQ works to low VQ dimensions ($\le 8$) that in turn limit quantization quality. Here, we introduce QTIP, which instead uses trellis coded quantization (TCQ) to achieve ultra-high-dimensional quantization. TCQ uses a stateful decoder that separates the codebook size from the bitrate and effective dimension. QTIP introduces a spectrum of lookup-only to computed lookup-free trellis codes designed for a hardware-efficient "bitshift" trellis structure; these codes achieve state-of-the-art results in both quantization quality and inference speed.
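For intuition, the sketch below (Python/NumPy, not from the paper) shows how a stateful "bitshift" trellis decoder can separate the per-weight bitrate from the effective codebook size: each decoded weight depends on an L-bit sliding state, while only k fresh bits are consumed per weight. The state width L, bitrate k, and the random Gaussian lookup table are illustrative assumptions, not QTIP's actual codes.

```python
import numpy as np

# Toy bitshift-trellis decoder (illustrative only; not QTIP's codes).
L = 16          # state width in bits; effective trellis has 2**L states
k = 2           # bits consumed per decoded weight (the bitrate)
rng = np.random.default_rng(0)
codebook = rng.standard_normal(2 ** L).astype(np.float32)  # toy lookup table

def decode(bits: np.ndarray) -> np.ndarray:
    """Decode a 0/1 bit array into weights, k bits at a time."""
    assert len(bits) % k == 0
    state = 0
    mask = (1 << L) - 1
    out = []
    for i in range(0, len(bits), k):
        # Shift k fresh bits into the L-bit state window.
        for b in bits[i:i + k]:
            state = ((state << 1) | int(b)) & mask
        # The reconstruction depends on the full L-bit state, so the
        # effective dimension exceeds the k bits spent per weight.
        out.append(codebook[state])
    return np.array(out, dtype=np.float32)

# Example: decode 8 weights at 2 bits per weight from a random bitstream.
bits = rng.integers(0, 2, size=8 * k)
print(decode(bits))
```

Swapping the lookup table for a cheap computed function of the state gives the lookup-free end of the spectrum described in the abstract.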
