Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad
The International Mathematical Olympiad (“IMO”) is the world’s most prestigious competition for young mathematicians, and has been held annually since 1959. Each country taking part is represented by six elite, pre-university mathematicians who compete to solve six exceptionally difficult problems in algebra, combinatorics, geometry, and number theory. Medals are awarded to the top half of contestants, with approximately 8% receiving a prestigious gold medal.
Recently, the IMO has also become an aspirational challenge for AI systems as a test of their advanced mathematical problem-solving and reasoning capabilities. Last year, Google DeepMind’s combined AlphaProof and AlphaGeometry 2 systems achieved the silver-medal standard, solving four out of the six problems and scoring 28 points. That breakthrough, which relied on specialist formal languages, demonstrated that AI was beginning to approach elite human mathematical reasoning.
This year, we were amongst an inaugural cohort to have our model results officially graded and certified by IMO coordinators using the same criteria as for student solutions. Recognizing the significant accomplishments of this year’s student participants, we’re now excited to share the news of Gemini’s breakthrough performance.
Breakthrough Performance at IMO 2025 with Gemini Deep Think
An advanced version of Gemini Deep Think solved five out of the six IMO problems perfectly, earning 35 total points and achieving gold-medal-level performance. The solutions can be found online here.
"We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."
IMO President Prof. Dr. Gregor Dolinar
This achievement is a significant advance over last year’s breakthrough result. At IMO 2024, AlphaProof and AlphaGeometry 2 required experts to first translate the problems from natural language into domain-specific formal languages, such as Lean, and then to translate the resulting proofs back. The systems also took two to three days of computation. This year, our advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions – all within the 4.5-hour competition time limit.
Making the most of Deep Think mode
We achieved this year’s result using an advanced version of Gemini Deep Think – an enhanced reasoning mode for complex problems that incorporates some of our latest research techniques, including parallel thinking. This setup enables the model to simultaneously explore and combine multiple possible solutions before giving a final answer, rather than pursuing a single, linear chain of thought.
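The post does not describe the internals of parallel thinking, but the basic idea — exploring several candidate reasoning paths concurrently and keeping the strongest — can be sketched in a few lines. Everything below (`generate_candidate`, `score_candidate`, `parallel_think`) is a hypothetical stand-in for illustration, not Gemini’s actual mechanism:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(problem: str, seed: int) -> str:
    # Stand-in for one independent chain of reasoning.
    # A per-call generator keeps results deterministic across threads.
    rng = random.Random(seed)
    return f"attempt-{seed} for {problem} | quality={rng.random():.2f}"

def score_candidate(candidate: str) -> float:
    # Stand-in for a verifier/critic that rates a finished solution.
    return float(candidate.split("quality=")[1])

def parallel_think(problem: str, n_paths: int = 8) -> str:
    # Explore n_paths candidate solutions concurrently, then keep the best
    # rather than committing to a single, linear chain of thought.
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        candidates = list(pool.map(
            lambda seed: generate_candidate(problem, seed), range(n_paths)))
    return max(candidates, key=score_candidate)

print(parallel_think("IMO 2025, Problem 1"))
```

In a real system the candidates would be full model generations and the selector far more sophisticated (including combining partial solutions, which this sketch omits); the structure — fan out, evaluate, select — is the part being illustrated.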
To make the most of the reasoning capabilities of Deep Think, we additionally trained this version of Gemini using novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving, and theorem-proving data. We also provided Gemini with access to a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions.
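Those reinforcement learning techniques are not described in the post, but the core loop of learning from a correctness signal can be illustrated with a toy example: an epsilon-greedy agent learns which proof “tactic” most often passes a simulated checker. The tactic names, success rates, and `attempt` function are all invented for illustration and bear no relation to the actual training setup:

```python
import random

# Invented tactic names and success probabilities; attempt() is a
# stand-in for running a multi-step proof attempt through a checker.
SUCCESS_RATE = {"induction": 0.2, "contradiction": 0.3, "construction": 0.8}

def attempt(tactic: str, rng: random.Random) -> bool:
    return rng.random() < SUCCESS_RATE[tactic]

def train(steps: int = 1000, epsilon: float = 0.1, seed: int = 0) -> dict:
    # Epsilon-greedy bandit: mostly exploit the best tactic so far,
    # occasionally explore, and update success estimates from rewards.
    rng = random.Random(seed)
    wins = {t: 0 for t in SUCCESS_RATE}
    pulls = {t: 0 for t in SUCCESS_RATE}
    # Warm start: try each tactic a fixed number of times first.
    for t in SUCCESS_RATE:
        for _ in range(100):
            pulls[t] += 1
            wins[t] += attempt(t, rng)
    for _ in range(steps):
        if rng.random() < epsilon:
            t = rng.choice(list(SUCCESS_RATE))
        else:
            t = max(SUCCESS_RATE, key=lambda u: wins[u] / pulls[u])
        pulls[t] += 1
        wins[t] += attempt(t, rng)
    # Return the learned success estimate for each tactic.
    return {t: wins[t] / pulls[t] for t in SUCCESS_RATE}

estimates = train()
print(max(estimates, key=estimates.get))
```

The real techniques operate over model policies and long proof trajectories rather than a three-armed bandit, but the shape is the same: act, receive a verifiable reward, and shift future behaviour toward what succeeded.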
We will be making a version of this Deep Think model available to a set of trusted testers, including mathematicians, before rolling it out to Google AI Ultra subscribers.
The Future of AI and Mathematics
Google DeepMind has ongoing collaborations with the mathematical community, but we are still only at the start of AI’s potential to contribute to mathematics. By teaching our systems to reason more flexibly and intuitively, we are getting closer to building AI that can solve more complex and advanced mathematics.
While our approach this year was based purely on natural language with Gemini, we also continue making progress on our formal systems, AlphaGeometry and AlphaProof. We believe agents that combine natural language fluency with rigorous reasoning - including verified reasoning in formal languages - will become invaluable tools for mathematicians, scientists, engineers, and researchers, helping us advance human knowledge on the path to AGI.
Acknowledgements
We thank the International Mathematical Olympiad organization for their support.
Thang Luong led the overall technical direction of the advanced Gemini model with Deep Think for the IMO, and co-led the overall coordination of the IMO 2025 effort with Edward Lockhart.
The IMO 2025 system would not have been possible without the following technical leads. Dawsen Hwang and Junehyuk Jung co-led training data and expert evaluation. Jonathan Lee, Nate Kushman, Pol Moreno, and Yi Tay co-led the training of the advanced Gemini Deep Think model, while Lei Yu led model evaluation. Golnaz Ghiasi, Garrett Bingham, and Lalit Jain co-led Deep Think inference, while Dawsen Hwang and Vincent Cohen-Addad co-led an enhanced inference approach.
The IMO 2025 system was also developed with key contributions from Theophane Weber, Ankesh Anand for modeling; Vinay Ramasesh, Andreas Kirsch, Jieming Mao, Zicheng Xu, Wilfried Bounsi, Vahab Mirrokni for inference; Hoang Nguyen, Fred Zhang, Mahan Malihi, Yangsibo Huang for training data.
We also thank related teams and efforts for their contributions: the AlphaGeometry team, with Yuri Chervonyi (lead), Trieu Trinh, Hoang Nguyen, Junsu Kim, Mirek Olšák, Marcelo Menegali, and Xiaomeng Yang; and Miklós Z. Horváth, Aja Huang, and Goran Žužić for formal mathematics. We thank Fabian Pedregosa, Richard Song, Alex Zhai, Sara Javanmardi, YaGuang Li, Filipe Miguel de Almeida, Silvio Lattanzi, Ashkan Norouzi Fard, Tal Schuster, Honglu Fan, Xuezhi Wang, Aditi Mavalankar, Tom Schaul, and Rosemary Ke for support and collaboration.
We especially thank other core members of the Deep Think team (Archit Sharma, Tong He, Shubha Raghvendra), the post-training effort (Tianhe Kevin Yu, Siamak Shakeri, Hanzhao Lin, Cosmo Du, Sid Lall), and the Thinking Area research that the IMO 2025 system was built on.
This effort was advised by Quoc Le and Pushmeet Kohli, with program support from Kristen Chiafullo and Alex Goldin.
We’d also like to thank our experts for providing data and evaluations: Insuk Seo (lead), Jiwon Kang, Donghyun Kim, Junsu Kim, Jimin Kim, Seongbin Jeon, Yoonho Na, Seunghwan Lee, Jihoo Lee, Younghun Jo, Yongsuk Hur, Seongjae Park, Kyuhyeon Choi, Minkyu Choi, Su-Hyeok Moon, Seojin Kim, Yueun Lee, Taehun Kim, Jeeho Ryu, Seungwoo Lee, Dain Kim, Sanha Lee, Hyunwoo Choi, Aiden Jung, Youngbeom Jin, Jeonghyun Ahn, Junhwi Bae, Gyumin Kim, Nam Dung Tran, Cheng-Chiang Tsai, Kari Ragnarsson, Kiat Chuan Tan, Yahya Tabesh, Hamed Mahdavi, Azin Nazari, Xiangzhuo Ding, Chu-Lan Kao, Steven Creech, Tony Feng, Ciprian Manolescu.
And thanks to our serving and deployment experts: Emanuel Taropa, Charlie Chen, Joe Stanton, Cip Baetu, Alvin Abdagic, Federico Lebron, Ioana Mihailescu, Soheil Hassas Yeganeh, and Minh Gang.
Further thanks to Jessica Lo and Sajjad Zafar for their support for compute provision and management; Jane Labanowski, Andy Forbes, Sean Nakamoto for legal and logistics; and Omer Levy, Timothy Lillicrap, Jack Rae, Yifeng Lu, Heng-tze Cheng, Ed Chi, Vahab Mirrokni, Tulsee Doshi, Madhavi Sewak, Melvin Johnson, Koray Kavukcuoglu, Oriol Vinyals, Jeff Dean, Demis Hassabis, and Sergey Brin for their support and advice.
Finally, we thank Prof. Gregor Dolinar of the IMO Board for his support and endorsement.
The IMO has confirmed that our submitted answers are complete and correct solutions. It is important to note that their review does not extend to validating our system, processes, or underlying model (see more).