
DeepMind's AI Nearly Clinched Gold at the International Mathematical Olympiad

AI Competes with the Best: A Milestone in Mathematical Reasoning

Artificial intelligence has taken a remarkable leap by matching the prowess of elite young mathematicians. DeepMind’s AlphaProof and AlphaGeometry 2 models recently performed at a silver-medal standard in the International Mathematical Olympiad (IMO), showcasing breakthroughs in advanced mathematical reasoning and problem-solving.

The IMO: A True Test of Intelligence

The IMO stands as the ultimate challenge for high school mathematicians, featuring six grueling problems across algebra, combinatorics, geometry, and number theory. Traditionally, only the most dedicated students excel after years of preparation. Now, the IMO has become a new proving ground for AI, setting the bar for what machines can achieve in abstract reasoning.

This year, DeepMind’s AI systems tackled the official IMO 2024 problems, solving four out of six and earning 28 out of 42 points. This score placed them at the top of the silver-medal range, just one point shy of the gold threshold, a feat that highlights how close AI is to matching the world’s best young minds.

Inside AlphaProof: Formalizing Mathematical Ingenuity

AlphaProof represents a breakthrough in formal mathematical reasoning. Built on the Lean language, it merges a pre-trained language model with AlphaZero’s reinforcement learning, empowering it to both generate and verify proofs with precision. Unlike conventional models that may produce plausible but flawed answers, AlphaProof ensures mathematical rigor.

  • AlphaProof trained on millions of problems before and during the competition, spanning a range of difficulties.

  • During IMO 2024, it solved two algebra problems and one number theory problem, including the event’s most challenging question, which only five human contestants solved.

  • The model’s learning mechanism strengthens with every proof, enabling progress toward ever tougher challenges.
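DeepMind has not published AlphaProof’s internals, but the guarantee the article describes is concrete: a proof written in Lean is checked mechanically by the proof checker, so a plausible-but-flawed argument simply fails to verify. As a toy flavor of what a machine-checked statement looks like (far below IMO difficulty), consider:

```lean
-- A trivial machine-checked theorem in Lean 4: for every natural
-- number n, n + 0 = n. The kernel verifies the proof term `rfl`
-- (reflexivity), so the statement cannot be accepted if it is wrong.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
```

AlphaProof works with statements vastly harder than this, but the verification principle is the same: every step must pass the Lean checker.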

AlphaGeometry 2: A Leap in Geometric Reasoning

AlphaGeometry 2 takes geometry problem-solving to new heights. Leveraging a neuro-symbolic hybrid framework and powered by DeepMind’s Gemini, it was trained on ten times as much synthetic data as its predecessor. Its advanced symbolic engine and collaborative knowledge features enable it to solve problems faster and more reliably.

  • AlphaGeometry 2 solved 83% of historical IMO geometry problems prior to the latest competition, up from 53% with the original model.

  • It tackled the 2024 IMO geometry problem in just 19 seconds after formalization, highlighting its speed and creative construction skills.
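DeepMind has not released AlphaGeometry 2’s implementation. The following is a minimal, hypothetical Python sketch of the neuro-symbolic pattern described above: a proposer (standing in for the neural model) suggests auxiliary constructions, and a symbolic engine forward-chains deduction rules until the goal is reached. All function names and the toy rule format here are assumptions for illustration, not DeepMind’s API.

```python
# Hypothetical sketch of a neuro-symbolic solving loop (not DeepMind's code).
# Facts are strings; a rule is (premises: frozenset of facts, conclusion: fact).

def symbolic_closure(facts, rules):
    """Forward-chain deduction rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def solve(goal, facts, rules, proposals):
    """Alternate pure deduction with proposed auxiliary constructions.

    `proposals` stands in for the neural model: in AlphaGeometry it is a
    language model suggesting constructions (points, lines, circles);
    here it is just an iterable of candidate facts.
    """
    facts = symbolic_closure(facts, rules)
    for aux in proposals:
        if goal in facts:
            return True
        facts = symbolic_closure(facts | {aux}, rules)
    return goal in facts

# Toy usage: the goal is only reachable once the auxiliary fact is added.
rules = [(frozenset({"A"}), "B"), (frozenset({"B", "aux"}), "goal")]
print(solve("goal", {"A"}, rules, ["aux"]))  # True
print(solve("goal", {"A"}, rules, []))       # False
```

The real system replaces the fixed proposal list with a trained language model and the toy rules with a full geometric deduction engine, but the division of labor, neural suggestion plus exhaustive symbolic verification, is the same.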

Natural Language Reasoning: Expanding AI’s Versatility

DeepMind is also exploring natural language reasoning with a Gemini-based system that can approach IMO problems without the need for formal translation. While this technology is in its early days, its promising results point toward a future where AI can support mathematicians in both formal and informal contexts, enhancing collaboration and discovery.

Implications: A New Era for AI and Mathematical Discovery

These advances mark a turning point for AI in scientific research. By performing at a silver-medal level at the IMO, DeepMind’s models demonstrate a capacity for creative and abstract reasoning that rivals top human competitors. The potential applications reach far beyond competitions: tools like AlphaProof and AlphaGeometry 2 could revolutionize mathematical discovery, aid in complex proof verification, and inspire innovative research strategies.

As these AI systems evolve, the synergy between mathematicians and machines promises to unlock new insights and accelerate progress across mathematics and related fields.

Source: Google DeepMind Blog

Joshua Berkowitz September 30, 2025