Quantum computers, which promise to solve problems beyond the reach of classical machines, face a major hurdle: errors caused by quantum information leaking out of the protected subspace in which it is encoded. Researchers have now experimentally demonstrated a technique that significantly reduces this leakage, using IBM's cloud quantum computer. This advancement could make quantum algorithms more reliable, bringing practical quantum computing closer to reality.
The key finding is that Leakage Elimination Operators (LEOs) can suppress errors by preventing quantum information from escaping designated subspaces. In tests on two- and three-qubit systems, the researchers applied LEOs and observed that the fidelity—a measure of how well the quantum state is preserved—remained stable over time, unlike in uncontrolled systems where fidelity dropped due to leakage.
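To make the idea concrete, here is a minimal numerical sketch (not taken from the paper) of what "preventing escape from a designated subspace" means. It assumes a two-qubit system whose protected subspace is spanned by |01⟩ and |10⟩, and checks two standard properties of an LEO: it equals the difference of the subspace projectors up to a phase, and it anticommutes with any operator that couples the subspace to the leakage levels.

```python
import numpy as np

# Two-qubit basis order |00>, |01>, |10>, |11>.
# Hypothetical protected subspace: span{|01>, |10>}.
P = np.diag([0.0, 1.0, 1.0, 0.0])   # projector onto the protected subspace
Q = np.eye(4) - P                   # projector onto the leakage levels

# Candidate LEO: Z applied to each qubit.  Up to a global sign this is
# P - Q, the textbook form of a leakage elimination operator.
Z = np.diag([1.0, -1.0])
L = np.kron(Z, Z)
assert np.allclose(L, Q - P)        # L = -(P - Q): an LEO up to phase

# A "leakage" operator maps the subspace into its complement
# (E = Q E P + P E Q).  Any such operator anticommutes with L, which is
# why rapid L pulses can average leakage away.
E = np.zeros((4, 4))
E[0, 1] = E[1, 0] = 1.0             # couples |01> (inside) to |00> (outside)
assert np.allclose(L @ E + E @ L, np.zeros((4, 4)))
```

The anticommutation property is the key: sandwiching the evolution between L pulses flips the sign of the leakage coupling, so leakage accumulated in one interval is undone in the next.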
The methodology involved using IBM's five-qubit quantum computer to implement three types of LEOs. These operators were applied as fast, intense pulses, similar to rapid on-off switches, to counteract the effects of decoherence—the process by which quantum systems lose information to their environment. For example, in one test, Z gates were applied to all qubits to protect specific subspaces, while in another, CNOT gates served as LEOs. The experiments initialized quantum states in leakage-free subspaces and compared evolution with and without LEO application over sequences of up to 600 pulses, with each run involving 1024 measurements.
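The pulsed "bang-bang" scheme can be sketched numerically. The following is a toy model, not the paper's actual circuits: it assumes a made-up leakage Hamiltonian that drives |01⟩ out of the protected subspace {|01⟩, |10⟩}, and interleaves its evolution with Z⊗Z kicks playing the role of the LEO pulses.

```python
import numpy as np

# Basis |00>, |01>, |10>, |11>; protected subspace span{|01>, |10>}.
# Toy leakage Hamiltonian: drives |01> out of the subspace to |00>.
H = np.zeros((4, 4))
H[0, 1] = H[1, 0] = 1.0                       # coupling in arbitrary units

def propagator(dt):
    """Exact one-interval propagator exp(-i H dt) via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

# LEO pulse: Z on both qubits.  It anticommutes with H, so the sequence
# (L U L U) equals exp(+iHdt) exp(-iHdt) = identity: leakage accumulated
# in one interval is refocused in the next.
L = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))

P = np.diag([0.0, 1.0, 1.0, 0.0])             # subspace projector
def fidelity(psi):
    """Population remaining in the protected subspace."""
    return float(np.real(psi.conj() @ P @ psi))

psi0 = np.array([0, 1, 0, 0], dtype=complex)  # start in |01>
dt, n_pulses = 0.05, 200                      # even number of kicks
step = propagator(dt)

psi_free, psi_leo = psi0.copy(), psi0.copy()
for _ in range(n_pulses):
    psi_free = step @ psi_free                # uncontrolled evolution
    psi_leo = L @ (step @ psi_leo)            # evolve, then kick with the LEO

print(f"free evolution fidelity: {fidelity(psi_free):.3f}")  # leaks away
print(f"with LEO pulses:         {fidelity(psi_leo):.3f}")   # stays ~1
```

In this idealized model the cancellation is exact after every pair of pulses; on real hardware, finite pulse strength and duration make it only approximate, which is exactly the limitation the experiments probe.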
Analysis, based on figures from the paper, showed that LEOs maintained fidelity at approximately 0.8 for the two-qubit system using Z gates and around 0.6 for the three-qubit case, as seen in Figure 4. In contrast, free evolution without LEOs led to a decline in fidelity, with populations shifting to lower-energy states like the ground state, indicating leakage. For the CNOT-based LEO, fidelity stayed near ideal levels, but its effectiveness decreased when the time between pulses increased, as illustrated in Figure 5, where longer intervals made the LEO counterproductive.
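The pulse-interval dependence can also be illustrated with a toy model (again an assumption-laden sketch, not the paper's data). If the Hamiltonian contains a piece the LEO cannot refocus, the cancellation is only first order in the interval between pulses, so widening that interval degrades protection, qualitatively matching the Figure 5 behavior.

```python
import numpy as np

# Basis |00>, |01>, |10>, |11>; protected subspace span{|01>, |10>}.
A = np.zeros((4, 4)); A[0, 1] = A[1, 0] = 1.0   # leakage: |01> <-> |00>
B = np.zeros((4, 4)); B[0, 3] = B[3, 0] = 1.0   # commutes with the LEO
H = A + B   # the LEO flips the sign of A only, so refocusing is imperfect

L = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))  # Z (x) Z pulse
P = np.diag([0.0, 1.0, 1.0, 0.0])                        # subspace projector

def mean_fidelity(dt, total_time=10.0):
    """Average subspace population when an LEO pulse follows every dt."""
    w, V = np.linalg.eigh(H)
    step = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    psi = np.array([0, 1, 0, 0], dtype=complex)          # start in |01>
    fids = []
    for _ in range(int(round(total_time / dt))):
        psi = L @ (step @ psi)                           # evolve, then kick
        fids.append(float(np.real(psi.conj() @ P @ psi)))
    return sum(fids) / len(fids)

print(f"dt = 0.02: mean fidelity {mean_fidelity(0.02):.3f}")  # near 1
print(f"dt = 0.50: mean fidelity {mean_fidelity(0.50):.3f}")  # degraded
```

The residual leakage here grows with the pulse interval, which is why the experiments needed the kicks to be as fast and frequent as the hardware allowed.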
This work matters because leakage errors undermine the error protection benefits of quantum encoding, and LEOs are compatible with universal quantum computing, meaning they can be used alongside any algorithm. By reducing errors, this approach could improve the accuracy of quantum computations in fields like cryptography or materials science, without requiring major hardware changes.
Limitations noted in the paper include the constraints of IBM's quantum computer, which prevented ideal implementation of the pulse sequences. The effectiveness of LEOs depends on short time intervals between pulses, and in real-world setups, factors like pulse strength and duration may vary, potentially reducing performance. Further research is needed to optimize these parameters for larger-scale systems.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.