The Transformer architecture is ubiquitously used as the building block of large-scale autoregressive language models. However, finding architectures with the optimal trade-off between task performance (perplexity) and hardware constraints such as peak memory utilization and latency is non-trivial, and the proliferation of diverse target hardware only exacerbates the problem. We leverage the somewhat surprising empirical observation that the number of decoder parameters in autoregressive Transformers has a high rank correlation with task performance, irrespective of the architecture topology. This observation organically induces a simple Neural Architecture Search (NAS) algorithm that uses decoder parameter count as a proxy for perplexity, without the need for any model training. The search phase of our training-free algorithm, dubbed Lightweight Transformer Search (LTS), can be run directly on target devices since it does not require GPUs. Using on-target-device measurements, LTS extracts the Pareto frontier of perplexity versus any hardware performance cost. We evaluate LTS on diverse devices, from ARM CPUs to NVIDIA GPUs, and on two popular autoregressive Transformer backbones: GPT-2 and Transformer-XL. Results show that the perplexity of 16-layer GPT-2 and Transformer-XL can be matched with up to 1.5× and 2.5× faster runtime, respectively, and 1.2× and 2.0× lower peak memory utilization. When evaluated in zero- and one-shot settings, LTS Pareto-frontier models achieve higher average accuracy than the 350M-parameter OPT across 14 tasks, with up to 1.6× lower latency. LTS extracts the Pareto frontier in under 3 hours while running on a commodity laptop. By effectively removing the carbon footprint of hundreds of GPU hours of training during search, LTS offers a strong, simple baseline for future NAS methods in autoregressive language modeling.
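The search loop described in the abstract (decoder parameter count as a training-free proxy for perplexity, on-device latency measurements, and Pareto-frontier extraction) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the released LTS implementation: the parameter-count formula is a simplification, and `sample_architecture` and `measure_latency_ms` are hypothetical callables standing in for the paper's architecture sampler and on-device profiler.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Candidate:
    config: Dict[str, int]   # architecture knobs: n_layer, d_model, d_inner, ...
    decoder_params: int      # proxy for perplexity (more tends to be better)
    latency_ms: float        # measured directly on the target device


def decoder_param_count(config: Dict[str, int]) -> int:
    """Analytic decoder parameter count: no training and no GPU required.
    Simplified to attention projections plus feed-forward weights per layer."""
    d, d_inner, n_layer = config["d_model"], config["d_inner"], config["n_layer"]
    attn = 4 * d * d            # Q, K, V, and output projections
    ffn = 2 * d * d_inner       # two feed-forward projections
    return n_layer * (attn + ffn)


def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one (more decoder params, lower latency)."""
    return (a.decoder_params >= b.decoder_params
            and a.latency_ms <= b.latency_ms
            and (a.decoder_params > b.decoder_params or a.latency_ms < b.latency_ms))


def pareto_frontier(cands: List[Candidate]) -> List[Candidate]:
    """Keep only candidates that no other candidate dominates."""
    return [c for c in cands if not any(dominates(o, c) for o in cands)]


def search(sample_architecture: Callable[[], Dict[str, int]],
           measure_latency_ms: Callable[[Dict[str, int]], float],
           n_iter: int = 1000) -> List[Candidate]:
    """Training-free search: score each sampled architecture by its decoder
    parameter count and its measured latency, then keep the Pareto frontier."""
    cands = [
        Candidate(cfg, decoder_param_count(cfg), measure_latency_ms(cfg))
        for cfg in (sample_architecture() for _ in range(n_iter))
    ]
    return pareto_frontier(cands)
```

Peak memory utilization could be added as a third objective inside `dominates` in the same way; the key point is that nothing in the loop requires back-propagation or a GPU, which is why the search can run directly on the target device.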
Author Information
Mojan Javaheripi (University of California San Diego)
I am a PhD student at UC San Diego working under the supervision of Prof. Farinaz Koushanfar. My research lies at the intersection of machine learning algorithms and systems. I tackle challenges in enabling hardware-aware and secure Deep Learning (DL). I have worked on efficient DL training and execution on constrained devices, as well as adversarially robust DL models. I am the recipient of the 2019 Qualcomm Innovation Fellowship award. Prior to my PhD, I obtained my Bachelor's in Electrical Engineering with a major in digital system design. Skills: Deep Learning, AutoML, Computer Vision, Discrete and Continuous Optimization, Computer Architecture
Gustavo de Rosa (Microsoft Research)
Subhabrata Mukherjee (Microsoft)
Shital Shah (Microsoft)
Tomasz Religa (University of Cambridge)
Caio Cesar Teodoro Mendes (Microsoft)
Sebastien Bubeck (Microsoft Research)
Farinaz Koushanfar (William Marsh Rice University)
Debadeepta Dey (Microsoft Research)
I am a researcher in the Adaptive Systems and Interaction (ASI) group led by Dr. Eric Horvitz at Microsoft Research, Redmond, USA. I finished my PhD at the Robotics Institute, Carnegie Mellon University, USA, where I was advised by Prof. J. Andrew (Drew) Bagnell. I do fundamental as well as applied research in machine learning, control, and computer vision, with applications to autonomous agents in general and robotics in particular. My interests include decision-making under uncertainty, reinforcement learning, artificial intelligence, and machine learning. As of January 2019, I am also serving as Affiliate Assistant Professor at The School of Computer Science and Engineering, University of Washington, Seattle, USA. I regularly review for NeurIPS, ICLR, ICML, ICRA, IROS, IJRR, and JFR, and occasionally for CVPR, ECCV, ICCV, and Autonomous Robots.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models
  Tue, Nov 29 through Wed, Nov 30, Hall J #640
More from the Same Authors
- 2021 Spotlight: A single gradient step finds adversarial examples on random two-layers neural networks
  Sebastien Bubeck · Yeshwanth Cherapanamjeri · Gauthier Gidel · Remi Tachet des Combes
- 2022: FL-Talk: Covert Communication in Federated Learning via Spectral Steganography
  Huili Chen · Farinaz Koushanfar
- 2022: zPROBE: Zero Peek Robustness Checks for Federated Learning
  Zahra Ghodsi · Mojan Javaheripi · Nojan Sheybani · Xinqiao Zhang · Ke Huang · Farinaz Koushanfar
- 2023 Poster: Learning threshold neurons via edge of stability
  Kwangjun Ahn · Sebastien Bubeck · Sinho Chewi · Yin Tat Lee · Felipe Suarez · Yi Zhang
- 2022 Contributed Talk: zPROBE: Zero Peek Robustness Checks for Federated Learning
  Zahra Ghodsi · Mojan Javaheripi · Nojan Sheybani · Xinqiao Zhang · Ke Huang · Farinaz Koushanfar
- 2022 Spotlight: Lightning Talks 5B-2
  Conglong Li · Mohammad Azizmalayeri · Mojan Javaheripi · Pratik Vaishnavi · Jon Hasselgren · Hao Lu · Kevin Eykholt · Arshia Soltani Moakhar · Wenze Liu · Gustavo de Rosa · Nikolai Hofmann · Minjia Zhang · Zixuan Ye · Jacob Munkberg · Amir Rahmati · Arman Zarei · Subhabrata Mukherjee · Yuxiong He · Shital Shah · Reihaneh Zohrabi · Hongtao Fu · Tomasz Religa · Yuliang Liu · Mohammad Manzuri · Mohammad Hossein Rohban · Zhiguo Cao · Caio Cesar Teodoro Mendes · Sebastien Bubeck · Farinaz Koushanfar · Debadeepta Dey
- 2022 Poster: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
  Dongkuan (DK) Xu · Subhabrata Mukherjee · Xiaodong Liu · Debadeepta Dey · Wenhui Wang · Xiang Zhang · Ahmed Awadallah · Jianfeng Gao
- 2021 Poster: Adversarial Examples in Multi-Layer Random ReLU Networks
  Peter Bartlett · Sebastien Bubeck · Yeshwanth Cherapanamjeri
- 2021 Poster: A single gradient step finds adversarial examples on random two-layers neural networks
  Sebastien Bubeck · Yeshwanth Cherapanamjeri · Gauthier Gidel · Remi Tachet des Combes
- 2021 Poster: A Universal Law of Robustness via Isoperimetry
  Sebastien Bubeck · Mark Sellke
- 2021 Oral: A Universal Law of Robustness via Isoperimetry
  Sebastien Bubeck · Mark Sellke
- 2020 Poster: Network size and size of the weights in memorization with two-layers neural networks
  Sebastien Bubeck · Ronen Eldan · Yin Tat Lee · Dan Mikulincer
- 2020 Poster: Safe Reinforcement Learning via Curriculum Induction
  Matteo Turchetta · Andrey Kolobov · Shital Shah · Andreas Krause · Alekh Agarwal
- 2020 Spotlight: Safe Reinforcement Learning via Curriculum Induction
  Matteo Turchetta · Andrey Kolobov · Shital Shah · Andreas Krause · Alekh Agarwal
- 2019 Poster: Efficient Forward Architecture Search
  Hanzhang Hu · John Langford · Rich Caruana · Saurajit Mukherjee · Eric Horvitz · Debadeepta Dey
- 2019 Poster: Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
  Hadi Salman · Jerry Li · Ilya Razenshteyn · Pengchuan Zhang · Huan Zhang · Sebastien Bubeck · Greg Yang
- 2019 Spotlight: Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
  Hadi Salman · Jerry Li · Ilya Razenshteyn · Pengchuan Zhang · Huan Zhang · Sebastien Bubeck · Greg Yang
- 2019 Poster: Complexity of Highly Parallel Non-Smooth Convex Optimization
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Spotlight: Complexity of Highly Parallel Non-Smooth Convex Optimization
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2018 Poster: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Oral: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Poster: Is Q-Learning Provably Efficient?
  Chi Jin · Zeyuan Allen-Zhu · Sebastien Bubeck · Michael Jordan