Universal Self-Consistency with Large Language Models

Abstract

Self-consistency with chain-of-thought prompting (CoT) has demonstrated remarkable performance gains on various reasoning tasks by utilizing multiple reasoning paths sampled from the model. However, self-consistency relies on an answer extraction process to aggregate multiple solutions, which is not applicable to free-form answers. In this work, we propose Universal Self-Consistency (USC), which leverages the large language model (LLM) itself to select the most consistent solution among multiple candidates. We evaluate USC on a variety of benchmarks, including mathematical reasoning, long-context summarization, and open-ended question answering. On mathematical reasoning benchmarks including GSM8K and MATH, USC matches standard self-consistency performance without requiring the answers to share a common format. Meanwhile, USC consistently improves performance over greedy decoding on open-ended generation tasks.
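
The selection step can be realized with a single additional LLM call: concatenate the sampled candidates into one prompt and ask the model to pick the most consistent response. Below is a minimal Python sketch of this procedure; the `sample` and `llm` callables, the exact prompt wording, and the parsing fallback are illustrative assumptions rather than the paper's precise implementation.

```python
import re
from typing import Callable

def universal_self_consistency(
    question: str,
    sample: Callable[[str, int], list[str]],  # draws n responses at temperature > 0 (assumed interface)
    llm: Callable[[str], str],                # single greedy call used as the selector (assumed interface)
    n: int = 8,
) -> str:
    """Sample n candidate responses, then ask the model itself to pick the
    one most consistent with the others; no task-specific answer extraction."""
    candidates = sample(question, n)

    # Present all candidates in a single selection prompt.
    listing = "\n\n".join(
        f"Response {i + 1}:\n{resp}" for i, resp in enumerate(candidates)
    )
    selection_prompt = (
        f"I have generated the following responses to the question: {question}\n\n"
        f"{listing}\n\n"
        "Evaluate these responses and select the most consistent one based on "
        "majority consensus. Start your answer with "
        '"The most consistent response is Response X".'
    )
    verdict = llm(selection_prompt)

    # Parse the selected index; fall back to the first candidate if parsing fails.
    match = re.search(r"Response (\d+)", verdict)
    index = int(match.group(1)) - 1 if match else 0
    return candidates[index] if 0 <= index < len(candidates) else candidates[0]
```

Because consistency is judged over whole responses rather than extracted answers, the same procedure applies unchanged to free-form outputs such as summaries, where standard self-consistency's majority vote is undefined.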

Authors

Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, Denny Zhou

Venue

arXiv