

Poster

Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference

Rohan Baskar Prabhakar · Hengrui Zhang · David Wentzlaff

East Exhibit Hall A-C #2901
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Large Transformer networks are increasingly used in settings where low inference latency is necessary to enable new applications and improve the end-user experience. However, autoregressive inference is resource intensive and requires parallelism for efficiency. Parallelism introduces collective communication that is both expensive and marks a phase during which hardware resources are underutilized. Towards mitigating this, Kraken is an evolution of the standard Transformer architecture that is designed to complement existing tensor parallelism schemes for efficient inference on multi-device systems. By introducing a fixed degree of intra-layer model parallelism, the architecture allows collective operations to be overlapped with compute, decreasing latency and increasing hardware utilization. When trained on OpenWebText, Kraken models reach a similar perplexity as standard Transformers while also preserving their language modeling capabilities as evaluated on the SuperGLUE benchmark. Importantly, when tested on multi-GPU systems using TensorRT-LLM engines, Kraken speeds up Time To First Token by a mean of 35.6% across a range of model sizes, context lengths, and degrees of tensor parallelism.
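
To illustrate the general idea of overlapping a collective with compute, here is a minimal sketch in PyTorch. The layer structure, the names (local_branch, shared_proj), and the exact point of overlap are assumptions made for illustration; this is not the authors' Kraken implementation, only one way the abstract's description could be realized on top of torch.distributed asynchronous collectives.

# Illustrative sketch only: overlap an all-reduce with device-local compute.
# Assumes torch.distributed has been initialized (e.g., one process per GPU).
import torch
import torch.distributed as dist
import torch.nn as nn

class OverlappedParallelLayer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # Device-local branch that needs no communication (hypothetical).
        self.local_branch = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        # Projection applied to the globally reduced activations (hypothetical).
        self.shared_proj = nn.Linear(d_model, d_model)

    def forward(self, x_partial: torch.Tensor) -> torch.Tensor:
        # Launch the all-reduce of the previous layer's partial output asynchronously.
        x_global = x_partial.clone()
        handle = dist.all_reduce(x_global, op=dist.ReduceOp.SUM, async_op=True)

        # While the collective is in flight, run compute that only needs local data.
        local_out = self.local_branch(x_partial)

        # Wait for the collective to finish, then combine both paths.
        handle.wait()
        return local_out + self.shared_proj(x_global)

The key point the sketch conveys is that the communication latency of the all-reduce is hidden behind the local branch's computation, which is the mechanism the abstract credits for the reduced Time To First Token.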
