AuPair: Golden Example Pairs for Code Repair

Abstract

Scaling up inference-time compute has proven to be a valuable strategy for improving the performance of Large Language Models (LLMs) on several tasks without any additional fine-tuning. One task that benefits from additional inference-time compute is self-repair: given an initial flawed response produced by the LLM, the model is asked to correct its own mistake and produce an improved response. We propose leveraging the in-context learning capability of LLMs to aid self-repair. The key contribution of this paper is an approach to synthesise and select a golden set of pairs, each of which contains a problem, the initial guess produced by the LLM, and the consequent fix generated. Each golden example pair, or AuPair, is then provided as an in-context example at inference time to generate a candidate repaired solution with 1-shot prompting; in line with best-of-N, the highest-scoring response is selected. Given an inference-time compute budget of N LLM calls, our algorithm selects N AuPairs in a manner that maximises complementarity and usefulness. We demonstrate the results of our algorithm on the code repair task with 4 LLMs across 7 competitive programming datasets. The AuPairs produced by our approach provide a significant boost in performance compared to best-of-N, and also exhibit strong generalisation across datasets and models. Moreover, our approach maintains strong performance as the inference-time compute budget N is scaled up.
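
To make the inference procedure described above concrete, below is a minimal sketch of how a pre-selected set of AuPairs might be used at test time: one 1-shot repair attempt per AuPair, followed by best-of-N selection. The `llm` and `score` callables, the `AuPair` class, and the prompt template are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AuPair:
    problem: str  # problem statement of the golden example
    guess: str    # flawed initial solution produced by the LLM
    fix: str      # corrected solution

def repair_with_aupairs(
    problem: str,
    broken_code: str,
    aupairs: List[AuPair],               # N pre-selected golden pairs
    llm: Callable[[str], str],           # hypothetical: prompt -> completion
    score: Callable[[str, str], float],  # hypothetical: (problem, code) -> quality
) -> str:
    """Generate one repair candidate per AuPair (1-shot) and return the best."""
    candidates = []
    for pair in aupairs:
        # Each AuPair serves as a single in-context example for the repair prompt.
        prompt = (
            f"Problem:\n{pair.problem}\n"
            f"Incorrect solution:\n{pair.guess}\n"
            f"Fixed solution:\n{pair.fix}\n\n"
            f"Problem:\n{problem}\n"
            f"Incorrect solution:\n{broken_code}\n"
            f"Fixed solution:\n"
        )
        candidates.append(llm(prompt))
    # Best-of-N: keep the highest-scoring repair among the N candidates.
    return max(candidates, key=lambda c: score(problem, c))
```

In a coding setup such as this one, `score` would plausibly come from executing the candidate against the problem's test cases; the best-of-N baseline differs in that it samples its N candidates without the in-context AuPair examples.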

Authors

Aditi Mavalankar, Hassan Mansoor, Zita Marinho, Masha Samsikova, Tom Schaul

Venue

ICML 2025