Abstract
A principal goal of computational neuroscience is to discover mathematical models that describe how the brain implements cognitive processes. Computational neuroscientists typically face a difficult choice between symbolic, model-driven approaches, which are interpretable but underperform at predicting animal behavior, and data-driven approaches, which train overparameterized models that predict behavior accurately but are often hard-to-interpret black boxes. In this work, we explore the use of an automated LLM-based program discovery tool (FunSearch, \cite{romeraparedes2024mathematical}) to discover novel symbolic programs that accurately describe animal behavior without sacrificing interpretability. We dub this approach CogFunSearch and verify its efficacy on behavioral data from rodents performing reward-guided choice tasks \cite{miller2021predictive}. We find that CogFunSearch reliably discovers programs that outperform the state-of-the-art cognitive model at predicting rat choices in a two-armed drifting bandit task. This holds both when CogFunSearch is used to improve on programs provided as input and when it is used to discover programs from scratch. Moreover, we find that CogFunSearch is sensitive to semantic information in the prompt: prompts that provide details of the dataset and suggest a reinforcement learning (RL) modeling framework lead to quantitatively better programs than uninformative prompts. Both the classic models and the discovered programs are significantly outperformed by a simple, yet unconstrained, neural network, implying that there remains substantial room for improvement in the predictive performance of the discovered functions. Broadly, these results provide early insights into the use of LLM-based program discovery tools for finding models of cognition.
Authors
Pablo Castro Rivadeneira, Kim Stachenfeld, Kevin Miller, Nenad Tomašev, Navodita Sharma, Ankit Anand, Alexander Novikov, Kuba Perlin, Nathaniel Daw, Will Dabney, Noémi Éltető, Siddhant Jain, Kyle Levin
Venue
bioRxiv