Pretrained large language models (LLMs) are widely used across many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners when given task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance on arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners simply by adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms standard zero-shot LLM performance on diverse benchmark reasoning tasks, including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large-scale InstructGPT model (text-davinci-002), with similar magnitudes of improvement for another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work serves not only as the minimal yet strongest zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
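For readers who want a concrete picture of the prompting scheme, the sketch below shows one plausible way to apply it in code. It is a minimal sketch under stated assumptions: the `call_llm` helper is a hypothetical stand-in for any text-completion API, and the second answer-extraction call is an illustrative detail beyond what the abstract specifies (the abstract only states that "Let's think step by step" is added before each answer).

```python
# Minimal sketch of Zero-shot-CoT prompting as described in the abstract.
# Assumptions (not specified there): a two-stage prompt/answer-extraction flow
# and a hypothetical `call_llm` helper standing in for any text-completion API
# (e.g., the text-davinci-002 or PaLM models mentioned above).

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a text-completion LLM and return its output."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")


def zero_shot_cot(question: str) -> str:
    # Stage 1: trigger step-by-step reasoning with the single fixed phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)

    # Stage 2: feed the generated reasoning back and ask for the final answer.
    # (The exact answer-extraction wording here is illustrative.)
    answer_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the answer is"
    return call_llm(answer_prompt)


if __name__ == "__main__":
    print(zero_shot_cot("There are 16 balls; half are golf balls and half of "
                        "the golf balls are blue. How many blue golf balls are there?"))
```

The same fixed template is reused across all benchmark tasks; only the question text changes.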
Author Information
Takeshi Kojima (The University of Tokyo)
Shixiang (Shane) Gu (Google Brain)
Machel Reid (Google Research)
Yutaka Matsuo (The University of Tokyo)
Yusuke Iwasawa (The University of Tokyo)
More from the Same Authors
- 2021 Spotlight: Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization
  Yusuke Iwasawa · Yutaka Matsuo
- 2021: Distributional Decision Transformer for Offline Hindsight Information Matching
  Hiroki Furuta · Yutaka Matsuo · Shixiang (Shane) Gu
- 2022: What Makes Certain Pre-Trained Visual Representations Better for Robotic Learning?
  Kyle Hsu · Tyler Lum · Ruohan Gao · Shixiang (Shane) Gu · Jiajun Wu · Chelsea Finn
- 2022: Control Graph as Unified IO for Morphology-Task Generalization
  Hiroki Furuta · Yusuke Iwasawa · Yutaka Matsuo · Shixiang (Shane) Gu
- 2023 Poster: For SALE: State-Action Representation Learning for Deep Reinforcement Learning
  Scott Fujimoto · Wei-Di Chang · Edward Smith · Shixiang (Shane) Gu · Doina Precup · David Meger
- 2023 Poster: DreamSparse: Escaping from Plato’s Cave with 2D Diffusion Model Given Sparse Views
  Paul Yoo · Jiaxian Guo · Yutaka Matsuo · Shixiang (Shane) Gu
- 2022 Workshop: Foundation Models for Decision Making
  Mengjiao (Sherry) Yang · Yilun Du · Jack Parker-Holder · Siddharth Karamcheti · Igor Mordatch · Shixiang (Shane) Gu · Ofir Nachum
- 2022 Poster: Langevin Autoencoders for Learning Deep Latent Variable Models
  Shohei Taniguchi · Yusuke Iwasawa · Wataru Kumagai · Yutaka Matsuo
- 2022 Poster: Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters
  Kamyar Ghasemipour · Shixiang (Shane) Gu · Ofir Nachum
- 2021 Poster: Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning
  Hiroki Furuta · Tadashi Kozuno · Tatsuya Matsushima · Yutaka Matsuo · Shixiang (Shane) Gu
- 2021 Poster: Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization
  Yusuke Iwasawa · Yutaka Matsuo