Spotlight
LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models
Mojan Javaheripi · Gustavo de Rosa · Subhabrata Mukherjee · Shital Shah · Tomasz Religa · Caio Cesar Teodoro Mendes · Sebastien Bubeck · Farinaz Koushanfar · Debadeepta Dey

Thu Dec 08 09:00 AM -- 11:00 AM (PST)

The Transformer architecture is ubiquitously used as the building block of large-scale autoregressive language models. However, finding architectures with the optimal trade-off between task performance (perplexity) and hardware constraints like peak memory utilization and latency is non-trivial. This is exacerbated by the proliferation of diverse hardware platforms. We leverage the somewhat surprising empirical observation that the number of decoder parameters in autoregressive Transformers has a high rank correlation with task performance, irrespective of the architecture topology. This observation organically induces a simple Neural Architecture Search (NAS) algorithm that uses decoder parameters as a proxy for perplexity without the need for any model training. The search phase of our training-free algorithm, dubbed Lightweight Transformer Search (LTS), can be run directly on target devices since it does not require GPUs. Using on-target-device measurements, LTS extracts the Pareto-frontier of perplexity versus any hardware performance cost. We evaluate LTS on diverse devices from ARM CPUs to NVIDIA GPUs and on two popular autoregressive Transformer backbones: GPT-2 and Transformer-XL. Results show that the perplexity of 16-layer GPT-2 and Transformer-XL can be matched with up to 1.5× and 2.5× faster runtime, respectively, and 1.2× and 2.0× lower peak memory utilization. When evaluated in zero- and one-shot settings, LTS Pareto-frontier models achieve higher average accuracy than the 350M-parameter OPT across 14 tasks, with up to 1.6× lower latency. LTS extracts the Pareto-frontier in under 3 hours while running on a commodity laptop. We effectively remove the carbon footprint of hundreds of GPU hours of training during search, offering a strong, simple baseline for future NAS methods in autoregressive language modeling.
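The abstract outlines the full training-free loop: sample candidate decoder architectures, score each one by its decoder parameter count as a proxy for perplexity, measure the hardware cost on the target device, and keep the Pareto-optimal candidates. The sketch below is a minimal Python illustration of that loop, not the authors' implementation; the toy search space, the parameter-count approximation, and the measure_latency placeholder are assumptions introduced here for illustration.

```python
# Minimal sketch of a training-free, proxy-based Pareto search (illustrative only).
import random
import time

def sample_architecture():
    """Randomly sample a candidate decoder configuration from a toy search space."""
    return {
        "n_layer": random.choice([4, 8, 12, 16]),
        "d_model": random.choice([256, 512, 768, 1024]),
        "d_inner": random.choice([1024, 2048, 3072, 4096]),
    }

def decoder_params(arch):
    """Training-free proxy: rough decoder parameter count.
    Per layer: attention (~4 * d_model^2) + feed-forward (~2 * d_model * d_inner)."""
    attn = 4 * arch["d_model"] ** 2
    ffn = 2 * arch["d_model"] * arch["d_inner"]
    return arch["n_layer"] * (attn + ffn)

def measure_latency(arch):
    """Placeholder for an on-target-device measurement; in practice this would
    time a forward pass of the instantiated model on the deployment hardware."""
    t0 = time.perf_counter()
    _ = sum(range(arch["n_layer"] * arch["d_model"]))  # dummy workload
    return time.perf_counter() - t0

def pareto_frontier(points):
    """Keep candidates not dominated by any other: higher proxy (more decoder
    parameters, i.e. lower expected perplexity) and lower latency are better."""
    frontier = []
    for arch, proxy, lat in points:
        dominated = any(
            p >= proxy and l <= lat and (p > proxy or l < lat)
            for _, p, l in points
        )
        if not dominated:
            frontier.append((arch, proxy, lat))
    return frontier

if __name__ == "__main__":
    candidates = [sample_architecture() for _ in range(200)]
    scored = [(a, decoder_params(a), measure_latency(a)) for a in candidates]
    for arch, proxy, lat in pareto_frontier(scored):
        print(f"{arch} proxy={proxy:,} latency={lat * 1e3:.3f} ms")
```

Because the proxy requires no training and the cost is measured directly on the target device, a loop of this shape can run on a CPU-only machine; the paper reports extracting the full Pareto-frontier in under 3 hours on a commodity laptop.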

Author Information

Mojan Javaheripi (University of California San Diego)

I am a PhD student at UC San Diego working under the supervision of Prof. Farinaz Koushanfar. My research lies at the intersection of machine learning algorithms and systems. I tackle challenges in enabling hardware-aware and secure Deep Learning (DL). I have worked on efficient DL training and execution on constrained devices as well as adversarially robust DL models. I am the recipient of the 2019 Qualcomm Innovation Fellowship award. Prior to my PhD, I obtained my Bachelor's in Electrical Engineering with a major in digital system design. Skills: Deep Learning, AutoML, Computer Vision, Discrete and Continuous Optimization, Computer Architecture

Gustavo de Rosa (Microsoft Research)
Subhabrata Mukherjee (Microsoft)
Shital Shah (Microsoft)
Tomasz Religa (University of Cambridge)
Caio Cesar Teodoro Mendes (Microsoft)
Sebastien Bubeck (Microsoft Research)
Farinaz Koushanfar (University of California San Diego)
Debadeepta Dey (Microsoft Research)

I am a researcher in the Adaptive Systems and Interaction (ASI) group led by Dr. Eric Horvitz at Microsoft Research, Redmond, USA. I finished my PhD at the Robotics Institute, Carnegie Mellon University, USA, where I was advised by Prof. J. Andrew (Drew) Bagnell. I do fundamental as well as applied research in machine learning, control, and computer vision, with applications to autonomous agents in general and robotics in particular. My interests include decision-making under uncertainty, reinforcement learning, artificial intelligence, and machine learning. As of January 2019 I am also serving as Affiliate Assistant Professor at the School of Computer Science and Engineering, University of Washington, Seattle, USA. I regularly review for NeurIPS, ICLR, ICML, ICRA, IROS, IJRR, and JFR, and occasionally for CVPR, ECCV, ICCV, and Autonomous Robots.
