April 22, 2026

Image Generators are Generalist Vision Learners

Abstract

Recent works show that image and video generators exhibit zero-shot visual understanding behaviors, in a way reminiscent of how Large Language Models (LLMs) such as Gemini and GPT develop emergent capabilities of language understanding and reasoning from generative pretraining. While it has long been conjectured that the ability to create visual content implies an ability to understand it, there has been limited evidence that generative vision models develop a strong understanding capability. In this work, we demonstrate that image generation training serves a role similar to LLM pretraining, letting models learn powerful and general visual representations that enable state-of-the-art performance on various vision tasks. We present Vision Banana, a model built upon Nano Banana Pro (NBP) by instruction-tuning on a mixture of its original training data and a small amount of vision task data. We parameterize the output space of vision tasks as RGB images, thereby reframing vision tasks as image generation, which allows us to more effectively leverage the generation capability of the base model. Our generalist model, Vision Banana, achieves state-of-the-art results on a variety of vision tasks involving both 2D and 3D understanding, beating or rivaling domain-specific specialists, including the Segment Anything series on various segmentation tasks and the Depth Anything series in metric depth estimation. We show that these results can be achieved with lightweight instruction-tuning without sacrificing the base model's image generation capabilities. These results suggest that image generation pretraining produces generalist vision learners. They also show that image generation serves as a unified and universal interface for vision tasks, similar to the role of text generation in language understanding and reasoning.
We believe that we are witnessing a major paradigm shift in computer vision, one that paves the way for building Foundational Vision Models from generative vision pretraining.

Authors

Valentin Gabeur, Shangbang Long, Songyou Peng, Paul Voigtlaender, Shuyang Sun, Yanan Bao, Karen Truong, Zhicheng Wang, Wenlei Zhou, Jonathan T. Barron, Kyle Genova, Nithish Kannen, Sherry Ben, Yandong Li, Mandy Guo, Suhas Yogin, Yiming Gu, Huizhong Chen, Oliver Wang, Saining Xie, Howard Zhou, Kaiming He, Thomas Funkhouser, Jean-Baptiste Alayrac, Radu Soricut

Venue

arXiv