AIResearch
Quantum Computing

New Algorithm Builds Perfect Quantum Arrays Despite Atom Loss

A two-step method enables efficient assembly of defect-free neutral-atom arrays for quantum computing, achieving over 99% fill rates even when atoms are lost during transport.

AI Research
March 26, 2026
4 min read

Quantum computers hold immense potential for fields like drug discovery and optimization, but their hardware remains a major hurdle. Among the most promising approaches are neutral-atom quantum computers, which use individually trapped atoms as qubits arranged in optical lattices. However, a critical challenge has been creating defect-free arrays of these atoms, as missing qubits lead to errors that undermine computations. A new algorithm called ATLAS addresses this by efficiently rearranging randomly loaded atoms into perfect configurations, even when some atoms are lost during the process, offering a practical path toward more reliable and scalable quantum systems.

The researchers developed ATLAS to convert a randomly loaded square lattice of atoms into a defect-free subarray, accounting for realistic physical constraints like acceleration limits and atom loss. In simulations, ATLAS consistently achieved fill rates above 99% within six iterations, meaning nearly all target sites were occupied. For example, with a 100x100 lattice and an atom occupation probability of 0.7, the algorithm reached perfect fill rates immediately in the first iteration when no loss occurred, as shown in Figure 3a. Even with a loss probability of 0.05, the mean fill rate remained high at 0.998, demonstrating robustness against stochastic failures. This performance is crucial because defect-free arrays are essential for high-fidelity quantum gates and error correction, enabling more complex quantum algorithms.
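The starting problem can be sketched with a quick simulation: with independent per-site loading at the paper's probability of 0.7, a 100x100 lattice yields roughly 7,000 atoms scattered at random, enough in raw count for only a smaller defect-free target. This counting exercise is illustrative, not part of the ATLAS procedure itself:

```python
import random

random.seed(0)

W = 100          # lattice width (the paper's 100x100 example)
p_load = 0.7     # per-site loading probability

# Stochastic loading: each site independently holds an atom with probability p_load.
lattice = [[random.random() < p_load for _ in range(W)] for _ in range(W)]
n_atoms = sum(map(sum, lattice))

# Largest square target the loaded atom count could in principle fill.
target_width = int(n_atoms ** 0.5)
print(f"loaded {n_atoms} atoms -> enough for at most a "
      f"{target_width}x{target_width} defect-free target")
```

Rearrangement is what turns this random scatter into a contiguous, fully occupied subarray, which is why fill rate (occupied target sites) is the headline metric.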

ATLAS operates in two main phases: planning and execution. In the planning phase, the algorithm computes optimal batches of parallel moves on a virtual copy of the lattice, assuming no atom loss. This phase involves seven subroutines, such as row-wise and column-wise centering, spread-and-squeeze cycles, and corner-block moves, all designed to maximize simultaneous atom transports while respecting hardware constraints like maximum acceleration of 2750 m/s² and trap transfer times of 60 microseconds. The execution phase then replays these moves under realistic conditions, applying probabilistic loss to each atom transfer. For instance, each move has a probability p_loss of losing the atom, modeled with Bernoulli trials, and the algorithm filters out moves where source atoms have already been lost, ensuring efficient use of remaining atoms.
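A minimal sketch of the execution-phase loss model described above, assuming a set-based representation of occupied sites; the function name and data layout are illustrative, not from the paper:

```python
import random

random.seed(1)

p_loss = 0.05  # per-transfer loss probability (a value used in the paper)

def execute(moves, occupied):
    """Replay planned moves on the real lattice with Bernoulli loss.

    moves: list of (src, dst) site coordinates from the planning phase.
    occupied: set of sites currently holding an atom.
    Moves whose source atom was already lost are filtered out.
    """
    for src, dst in moves:
        if src not in occupied:        # source atom lost earlier: skip the move
            continue
        occupied.discard(src)
        if random.random() >= p_loss:  # transfer succeeds with prob 1 - p_loss
            occupied.add(dst)
        # else: atom lost in transit, dst stays empty (a defect for the
        # next planning/execution iteration to repair)
    return occupied

# Toy example: shift three atoms one site to the right, rightmost first.
occupied = {(0, 0), (0, 1), (0, 2)}
moves = [((0, 2), (0, 3)), ((0, 1), (0, 2)), ((0, 0), (0, 1))]
final = execute(moves, occupied)
print(final)
```

Because planning runs on a lossless virtual copy, this replay step is where the stochastic failures enter, and it is what the iteration loop exists to clean up.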

Monte Carlo simulations across various lattice sizes, loading probabilities, and loss rates highlight ATLAS's efficiency and scalability. As seen in Figure 4, retention rates—the proportion of initially loaded atoms retained in the final array—were generally above 80% even with a loss probability of 0.05, outperforming prior algorithms like PSCA, which achieved a retention rate of about 0.75. Move scaling was sublinear, approximately proportional to M^0.55 where M is the total number of trap sites, indicating that the number of moves grows slower than the array size. Additionally, the required initial lattice size scaled linearly with the target dimension, even under loss, as shown in Figure 6b, reducing hardware overhead and making larger arrays feasible. A parallelization refinement further improved move scaling to an exponent of 0.472, matching leading multitweezer algorithms while maintaining higher retention.
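To make the two scaling exponents concrete, the snippet below computes how the move count grows when the lattice width doubles (so M quadruples). The exponents are from the paper; the prefactor is unknown, so only relative growth is meaningful here:

```python
# moves ∝ M^0.55 for baseline ATLAS and ∝ M^0.472 with the parallelization
# refinement; both are sublinear in the number of trap sites M.
exponents = {"baseline": 0.55, "refined": 0.472}

growth = {}
for name, k in exponents.items():
    # factor by which the move count grows when the width doubles (M -> 4M)
    growth[name] = 4 ** k

for name, g in growth.items():
    print(f"{name}: doubling the width multiplies moves by ~{g:.2f}")
```

A linear-in-M algorithm would quadruple its move count on each doubling of the width; both ATLAS variants stay well under that, which is what makes large arrays tractable.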

The implications of this work are significant for advancing neutral-atom quantum computing. By enabling reliable assembly of defect-free arrays, ATLAS could accelerate the development of scalable quantum processors that leverage the advantages of neutral atoms, such as longer coherence times and all-to-all connectivity. This algorithm not only improves atom utilization but also reduces demands on laser power and control channels, making experimental setups more practical. For everyday readers, this means progress toward quantum computers that can solve complex problems in fields like drug discovery and optimization, though practical applications remain in the future as hardware continues to evolve.

Despite its strengths, ATLAS has limitations noted in the paper. The algorithm's performance is less efficient for very small lattice sizes, such as 10x10, where safety margins in target zone initialization have a bigger impact, leading to lower retention rates. Simulations were capped at six iterations to reduce computation time, which occasionally left a few defects unaddressed under high loss conditions; without this cap, fill rates are expected to reach perfection. Additionally, the computational time scales cubically with lattice width, dominating total execution time on standard hardware, though on high-performance platforms, physical movement time would be the primary factor. These constraints highlight areas for future optimization, particularly in handling smaller arrays and further reducing iteration needs.
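The reported cubic growth of planning time with lattice width W can be made tangible with a back-of-the-envelope comparison, normalized to W = 100 (illustrative only; no absolute timings are implied):

```python
# Relative planning cost under the reported cubic scaling of computation
# time with lattice width W, normalized to the W = 100 case.
costs = {W: (W / 100) ** 3 for W in (100, 200, 400)}
for W, rel in costs.items():
    print(f"W={W}: ~{rel:.0f}x the W=100 planning cost")
```

Doubling the width multiplies the planning cost eightfold, which is why the authors note that on standard hardware computation, rather than physical atom movement, dominates total execution time.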

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.


Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.