Large Language Models Self-Discover Reasoning Structures

Abstract

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover and compose atomic reasoning modules, such as critical thinking and step-by-step reasoning, to tackle complex reasoning problems that are challenging for typical prompting methods, e.g., Chain-of-Thought (CoT). Core to the framework is a self-discover process in which LLMs select multiple atomic reasoning modules and compose them into an explicit, task-unique reasoning structure for LLMs to follow during decoding, in sharp contrast to the implicit reasoning in CoT. SELF-DISCOVER substantially improves the performance of GPT-4 and PaLM 2 on challenging reasoning benchmarks such as BigBench-Hard and Thinking4Doing, by as much as 30%. Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT with Self-Consistency and majority voting by more than 20% across 24 tasks on multiple LLMs, while requiring 10-40x less inference compute. Finally, the reasoning structures self-discovered by GPT-4 can also be applied to smaller models such as Llama 2, improving their reasoning capabilities and demonstrating the generalization of the discovered reasoning structures.
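The two-stage process described in the abstract, first composing a task-unique reasoning structure from atomic modules, then following that structure while solving each instance, can be sketched in Python. This is a minimal illustration only, not the paper's implementation: `call_llm`, the example module list, and the prompt wording are all hypothetical stand-ins for a real LLM API and the paper's actual module set.

```python
# Hypothetical sketch of the SELF-DISCOVER pipeline described in the abstract.
# `call_llm` is a stand-in for a real LLM API call (e.g. to GPT-4 or PaLM 2).

# Illustrative examples of atomic reasoning modules; the paper uses a
# larger predefined set.
REASONING_MODULES = [
    "Critical thinking: analyze the problem from different perspectives.",
    "Step-by-step reasoning: break the problem into ordered sub-steps.",
    "Decomposition: split the task into simpler parts.",
]


def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real implementation would query a model."""
    return f"[model response to: {prompt[:40]}...]"


def self_discover_structure(task_examples: list[str]) -> str:
    """Stage 1: select and compose reasoning modules into an explicit,
    task-unique reasoning structure (done once per task)."""
    selected = call_llm(
        "Select reasoning modules useful for solving these tasks:\n"
        + "\n".join(REASONING_MODULES)
        + "\nTasks:\n"
        + "\n".join(task_examples)
    )
    adapted = call_llm(f"Rephrase the selected modules to fit the task:\n{selected}")
    return call_llm(f"Compose the adapted modules into a step-by-step reasoning structure:\n{adapted}")


def solve(task_instance: str, structure: str) -> str:
    """Stage 2: the model follows the discovered structure during decoding,
    rather than reasoning implicitly as in CoT."""
    return call_llm(f"Follow this reasoning structure:\n{structure}\nTask:\n{task_instance}")
```

Because the structure is discovered once per task and reused for every instance, the method avoids the repeated sampling that makes approaches like CoT with Self-Consistency inference-intensive.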

Authors

Pei Zhou*, Jay Pujara*, Xiang Ren*, Swaroop Mishra, Steven Zheng, Denny Zhou, Heng-Tze Cheng, Quoc Le, Ed Chi, Xinyun Chen

* External author

Venue

arXiv