AlphaEvolve Tackles Kissing Problem & More

There’s a mathematical concept called the ‘kissing number.’ Somewhat disappointingly, it has nothing to do with actual kissing; it counts how many spheres can touch (or ‘kiss’) a single sphere of equal size without overlapping it. In one dimension, the kissing number is 2. In two dimensions, it’s 6 (think of the New York Times’ Spelling Bee puzzle configuration). As the number of dimensions grows, the answer becomes less obvious: for most dimensions above 4, only upper and lower bounds on the kissing number are known. Now an AI agent developed by Google DeepMind, called AlphaEvolve, has made its own contribution to the problem, raising the lower bound on the kissing number in 11 dimensions from 592 to 593.
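To make the two-dimensional case concrete, here is a small, self-contained Python check (purely illustrative, not from DeepMind’s work) that six unit circles placed at 60-degree intervals all touch a central unit circle without overlapping one another:

```python
import math

# Illustrative check of the 2-D kissing number: place 6 unit circles
# around a central unit circle at 60-degree intervals and confirm that
# each outer circle touches the center (center-to-center distance 2)
# without overlapping its neighbors (distance >= 2).
centers = [
    (2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
    for k in range(6)
]

for x, y in centers:
    assert math.isclose(math.hypot(x, y), 2.0)  # kisses the central circle

for i in range(6):
    for j in range(i + 1, 6):
        xi, yi = centers[i]
        xj, yj = centers[j]
        assert math.hypot(xi - xj, yi - yj) >= 2.0 - 1e-9  # no overlap

print("6 unit circles kiss the central circle in 2D")
```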

This may seem like an incremental improvement, especially since the upper bound on the kissing number in 11 dimensions is 868, so the unknown range remains large. But it represents a novel mathematical discovery by an AI agent, and it challenges the idea that large language models are incapable of original scientific contributions.

And this is just one example of what AlphaEvolve has accomplished. “We applied AlphaEvolve across a range of open problems in research mathematics, and we deliberately picked problems from different parts of math: analysis, combinatorics, geometry,” says Matej Balog, a research scientist at DeepMind who worked on the project. The team found that for 75 percent of the problems, the AI model replicated the already known optimal solution. In 20 percent of cases, it found a new optimum that surpassed any known solution. “Every single such case is a new discovery,” Balog says. (In the remaining 5 percent of cases, the AI converged on a solution worse than the known optimum.)

The model also developed a new algorithm for matrix multiplication—the operation that underlies much of machine learning. A previous DeepMind model, called AlphaTensor, had already beaten the previous best known algorithm, discovered in 1969, for multiplying 4 by 4 matrices. AlphaEvolve found a more general version of that improved algorithm.
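For a sense of what such discoveries look like, the sketch below shows the classic 1969 trick (Strassen’s algorithm), which multiplies two 2-by-2 matrices with 7 scalar multiplications instead of the usual 8. It is included only to illustrate the kind of arithmetic shortcut these systems search for; AlphaTensor and AlphaEvolve target larger block sizes with different multiplication schemes, and this is not AlphaEvolve’s new algorithm.

```python
# Strassen's 1969 scheme: 7 multiplications for a 2x2 matrix product
# instead of the 8 used by the textbook method.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]

# Quick check against the textbook 8-multiplication product.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
expected = [[1*5 + 2*7, 1*6 + 2*8], [3*5 + 4*7, 3*6 + 4*8]]
assert strassen_2x2(A, B) == expected
```

Applied recursively to large matrices in 2-by-2 blocks, saving even one multiplication per block compounds into a meaningfully faster algorithm, which is why shaving multiplications off these small schemes is worth an automated search.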

DeepMind’s AlphaEvolve made improvements to several practical problems at Google. Credit: Google DeepMind

In addition to abstract math, the team applied the model to practical problems Google faces every day. AlphaEvolve was used to optimize data center orchestration, yielding a 1 percent improvement; to optimize the design of the next Google tensor processing unit; and to improve a kernel used in Gemini training, leading to a 1 percent reduction in training time.

“It’s very surprising that you can do so many different things with a single system,” says Alexander Novikov, a senior research scientist at DeepMind who also worked on AlphaEvolve.

How AlphaEvolve Works

AlphaEvolve is able to be so general because it can be applied to almost any problem that can be expressed as code, and which can be checked by…
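The general recipe is an evolve-and-evaluate loop: candidate programs are repeatedly modified and kept only when an automatic checker scores them higher. The Python sketch below is a minimal, hypothetical illustration of that loop under stated assumptions; the random mutation merely stands in for the LLM-proposed code changes, and none of the function names come from DeepMind’s system.

```python
import random

def evaluate(candidate):
    """Problem-specific scorer (assumed): higher is better.
    In an AlphaEvolve-style setup this check must be fully automatic."""
    target = [3, 1, 4, 1, 5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate):
    """Toy stand-in for an LLM proposing a modified candidate program."""
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

# Evolve-and-evaluate loop: keep a candidate only if it scores better.
best = [0, 0, 0, 0, 0]
best_score = evaluate(best)
for _ in range(2000):
    child = mutate(best)
    score = evaluate(child)
    if score > best_score:
        best, best_score = child, score

print(best, best_score)
```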

The post “AlphaEvolve Tackles Kissing Problem & More” by Dina Genkina was published on 05/14/2025 by spectrum.ieee.org