AIResearch
Science

Indefinite Lattices Yield Surprisingly Short Vectors

A new algorithm finds much shorter vectors in indefinite lattices than previously thought possible, with implications for cryptography and number theory.

AI Research
March 26, 2026
4 min read

Lattice reduction, a fundamental tool in computer science since the 1980s, has long been constrained to positive definite forms, where all vectors have non-negative squared lengths. But what happens when this constraint is lifted, allowing vectors to have negative squared lengths? A new study reveals that indefinite lattices (those containing vectors of both positive and negative squared length) are not just a theoretical curiosity: they can be reduced to find surprisingly short vectors, potentially outperforming classical algorithms. This breakthrough challenges decades of assumptions and opens new avenues in fields like cryptography and computational number theory.

The researchers developed a novel algorithm that generalizes the famous LLL algorithm to indefinite lattices, where the scalar product is replaced by an arbitrary quadratic form that can be indefinite. Unlike previous approaches by Ivanyos and Szántó or Simon, which approximated the shortest vector with a factor exponential in the dimension, this new algorithm shows that the approximation factor depends on the signature of the lattice (the absolute difference between the number of positive and negative eigenvalues) rather than just the dimension. In practice, this means that for lattices with a small signature, the algorithm can find vectors that are exponentially shorter than those found by earlier methods, as demonstrated in experiments where vectors with squared norms of ±1 were discovered in dimension 10, far below the tenth root of the determinant.
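As a concrete illustration of these quantities (not code from the paper), the snippet below computes the signature of a symmetric Gram matrix from its eigenvalue signs and shows that squared norms under an indefinite form can be negative; the matrix G and vector x are arbitrary examples chosen here.

```python
import numpy as np

def signature(G: np.ndarray) -> int:
    """|n_plus - n_minus|: absolute difference between the counts of
    positive and negative eigenvalues of the symmetric matrix G."""
    eig = np.linalg.eigvalsh(G)          # real eigenvalues of a symmetric matrix
    n_plus = int(np.sum(eig > 0))
    n_minus = int(np.sum(eig < 0))
    return abs(n_plus - n_minus)

# An indefinite Gram matrix: squared norms x^T G x may be negative.
G = np.diag([1.0, 1.0, -1.0, -1.0])      # 2 positive, 2 negative eigenvalues
print(signature(G))                       # -> 0, the "small signature" case

x = np.array([1.0, 0.0, 2.0, 0.0])
print(x @ G @ x)                          # -> -3.0, a negative squared norm
```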

The methodology builds on adapting key components of the LLL algorithm to the indefinite case. First, the researchers introduced a generalized Gram-Schmidt orthogonalization that handles isotropic vectors (those with zero squared norm) without causing division by zero, by incorporating hyperbolic planes in which two consecutive vectors are isotropic but not orthogonal. Second, they replaced the Lovász condition for 2×2 blocks with a reduction based on indefinite binary quadratic forms, allowing for sign alternation in the Gram-Schmidt orthogonalized vectors. This sign alternation is crucial, as it minimizes the number of definite blocks, which in turn improves the approximation factor. The algorithm also includes strategies to avoid isotropic vectors unless they form hyperbolic planes, and to swap hyperbolic planes to optimize the basis.
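A minimal sketch of the first ingredient, under simplifying assumptions: classical Gram-Schmidt carries over if the dot product is replaced by the bilinear form b(u, v) = uᵀGv, but the projection divides by b(v, v), which vanishes for isotropic vectors. The sketch below simply raises an error in that case, whereas the paper's algorithm pairs such vectors into hyperbolic planes; the basis B and form G here are made-up examples.

```python
import numpy as np

def indefinite_gram_schmidt(B: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Orthogonalize the rows of B with respect to b(u, v) = u @ G @ v.

    Raises if an orthogonalized vector is isotropic (b(v, v) == 0); the
    paper instead absorbs such vectors into hyperbolic planes.
    """
    b = lambda u, v: u @ G @ v
    ortho = []
    for vec in B.astype(float):
        v = vec.copy()
        for u in ortho:
            v -= (b(vec, u) / b(u, u)) * u   # projection uses the indefinite form
        if abs(b(v, v)) < 1e-12:
            raise ValueError("isotropic vector: needs hyperbolic-plane handling")
        ortho.append(v)
    return np.array(ortho)

G = np.diag([1.0, -1.0, 2.0])                       # indefinite form
B = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])     # example basis (rows)
O = indefinite_gram_schmidt(B, G)
# The orthogonalized vectors are pairwise b-orthogonal, and the squared
# norms on the diagonal may alternate in sign:
print(np.round(O @ G @ O.T, 10))
```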

Experimental results from the paper show significant improvements over prior work. In tests with random 10×10 symmetric matrices, the new algorithm found vectors with squared norms of ±1, whereas Simon's algorithm returned a first vector with squared norm 9. For worst-case examples assembled from definite blocks, the algorithm maintained reduced forms without modification, but when these examples were perturbed by random unimodular matrices, it outperformed Simon's algorithm by finding shorter vectors. In a lattice with 9 positive and 1 negative eigenvalue (signature 8), the algorithm discovered a vector with squared norm -4, which Simon's algorithm missed, highlighting the advantage of leveraging signature-dependent bounds. The paper reports that the approximation factor can involve the signature squared rather than the dimension squared, leading to exponentially better results in practice.

The implications of this research are profound for applications in cryptography and number theory. In cryptography, lattice-based schemes often rely on the hardness of finding short vectors; indefinite lattices might offer new security assumptions or attack vectors. For number theory, the algorithm could improve computations involving quadratic forms or class groups, as hinted by Simon's earlier work. The ability to find shorter vectors in indefinite lattices also suggests that these structures are inherently easier to reduce than definite ones, possibly due to the infinite number of good bases in the indefinite case, as noted in the paper's conclusion. This could lead to revised complexity estimates and new algorithmic strategies in computational mathematics.

However, the study has limitations. The algorithm's worst-case performance still matches the upper bound from prior work for certain constructed examples, indicating that not all indefinite lattices yield dramatic improvements. The heuristic expectation suggests that for random inputs, the approximation factor depends on the signature squared, but this is not proven in all cases. Additionally, the current implementation in Magma is suboptimal, using rational arithmetic and recomputing Gram-Schmidt vectors after each modification, which affects speed. Future work could optimize this with floating-point techniques or update strategies. The paper also leaves open questions about extending the algorithm to solve related problems like the closest vector problem or applying it to Hermitian lattices, which can be transformed into indefinite lattices via dimension doubling.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn