Appendix B
Modalities
Introduction: The Physical vs. Theoretical Threshold
Quantum Error Correction (QEC) relies on the threshold theorem, which states that if the physical error rate p of quantum operations is below a critical threshold pth, arbitrarily long quantum computations can be performed by encoding information into logical qubits.
For the surface code, the theoretical threshold is relatively high, often cited at pth ≈ 10⁻² [57]. Operating precisely at this threshold represents a break-even point: the error correction process introduces errors at the exact rate it suppresses them.
However, to achieve sustainable error suppression, where the logical error rate pL is low enough to run millions of gates, the physical hardware must operate at roughly p ≈ 10⁻⁴, well below the threshold. The logical error rate scales with the code distance d:
pL ≈ C (p / pth)⌊(d+1)/2⌋
where C is a constant related to the number of combinations of failure mechanisms. Reaching p ≈ 10⁻⁴ dramatically reduces the overhead required to construct stable logical qubits.
The Breakdown of i.i.d. Noise Models
Theoretical QEC models often assume independent and identically distributed (i.i.d.) noise. Under this assumption, the probability of simultaneous errors on two qubits, A and B, is simply P(A ∩ B) = P(A) × P(B).
In practical hardware, particularly superconducting circuits, noise is heavily correlated. A primary source of highly correlated noise is ionizing radiation, such as cosmic rays [58]. When a high-energy particle strikes the silicon substrate, it generates a burst of phonons and quasiparticles that traverse the chip. Consequently, the i.i.d. assumption fails: P(A ∩ B) ≫ P(A)P(B), fundamentally compromising the surface code’s assumption of localized, independent errors. Control-line noise, crosstalk, and measurement errors contribute further correlated error mechanisms.
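A toy Monte Carlo makes the breakdown visible. The rates below (p_single, p_burst) are illustrative assumptions, not measured hardware values; the point is only that a rare correlated “burst” channel dominates the joint error probability:

```python
import random

random.seed(0)

# Toy Monte Carlo contrasting i.i.d. noise with burst-correlated noise.
# All rates here are illustrative assumptions, not measured values.
p_single = 0.01    # independent error probability per qubit
p_burst = 0.005    # probability of a correlated burst hitting both qubits

def trial(correlated):
    a = random.random() < p_single
    b = random.random() < p_single
    if correlated and random.random() < p_burst:
        a = b = True   # a burst event flips both qubits at once
    return a, b

def joint_error_rate(correlated, trials=200_000):
    both = sum(1 for _ in range(trials) if all(trial(correlated)))
    return both / trials

print("i.i.d.    :", joint_error_rate(False))   # ≈ p_single² = 1e-4
print("correlated:", joint_error_rate(True))    # dominated by p_burst
```

Even though the burst is rarer than a single-qubit error, it drives P(A ∩ B) dozens of times above the i.i.d. prediction P(A)P(B).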
Superconducting Circuits: Transmons and Connectivity
The Transmon Hamiltonian
The transmon qubit mitigates charge noise by operating in a regime where the Josephson energy (EJ) far exceeds the charging energy (EC). By shunting the Josephson junction with a large capacitance, the transmon becomes exponentially insensitive to charge noise ng [59].
The system is modeled as a nonlinear oscillator. The transmon Hamiltonian is given by:
Ĥ = 4EC(n̂ - ng)² - EJ cos(φ̂)
where n̂ is the Cooper pair number operator and φ̂ is the superconducting phase difference.
Anharmonicity and Leakage
Because the potential well is a cosine rather than a perfect parabola, the energy levels are not equally spaced. The anharmonicity α is defined as the difference between the first and second transition energies:
α = E₁₂ - E₀₁ ≈ -EC
In standard transmons, α / h ≈ -300 MHz. This weak anharmonicity imposes a fundamental speed limit on quantum gates to prevent spectral overlap and subsequent population leakage into the |2⟩ state. Leakage is population escaping the computational subspace {|0⟩, |1⟩} into higher levels. Recent advances in fabrication, such as scaffold-assisted window junctions, aim to further refine these parameters for better coherence [60].
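The spectrum above can be checked numerically by diagonalizing the transmon Hamiltonian in the charge basis, where the cos φ̂ term couples adjacent charge states with amplitude −EJ/2. The parameter values here (EJ/EC = 50) are assumed for illustration:

```python
import numpy as np

# Diagonalize Ĥ = 4·E_C (n̂ − n_g)² − E_J cos φ̂ in the charge basis.
# E_C, E_J, n_g below are assumed, typical-transmon values (E_J/E_C = 50).
EC, EJ, ng = 0.3, 15.0, 0.0    # GHz
N = 30                          # charge states n = -N..N
n = np.arange(-N, N + 1)
dim = 2 * N + 1
H = np.diag(4 * EC * (n - ng) ** 2) \
    - EJ / 2 * (np.eye(dim, k=1) + np.eye(dim, k=-1))
E = np.linalg.eigvalsh(H)       # eigenvalues in ascending order

E01 = E[1] - E[0]               # qubit transition
E12 = E[2] - E[1]               # next transition
alpha = E12 - E01               # anharmonicity
print(f"E01 = {E01:.3f} GHz, anharmonicity α = {alpha:.3f} GHz")
```

The numerical α lands close to −EC, confirming the approximation in the text, while E01 ≈ √(8·EJ·EC) − EC ≈ 5.7 GHz for these assumed parameters.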
Lattice Constraints: Heavy-Hex vs. Square
To combat frequency crowding and crosstalk on 2D architectures, IBM introduced the Heavy-Hex lattice [61]. By reducing the connectivity degree from four to two or three, the Heavy-Hex topology minimizes spectator errors and frequency collisions. However, this restricts the native implementation of the standard surface code, requiring specialized Quantum Low-Density Parity-Check (qLDPC) codes, such as the Bivariate Bicycle code, to optimize the physical-to-logical qubit ratio [62, 63]. There has also been research into alternative schemes that have space-time tradeoffs [64].
Google’s Willow chip demonstrated below-threshold surface-code error correction for the first time in 2024 [65]. With continued advances in fabrication, the superconducting platform is maturing rapidly.
Neutral Atoms: Rydberg Blockade and Reconfigurable Arrays
Neutral atoms trapped in optical tweezers present a highly scalable modality capable of dynamic on-the-fly reconfiguration [66].
The Rydberg Blockade
To perform a Controlled-Z (CZ) gate, atoms are excited to a highly energetic Rydberg state (n ≫ 1). The dipole-dipole interaction potential Vrr between two atoms at distance R scales strongly with the principal quantum number n:
Vrr ∝ n¹¹ / R⁶
This interaction creates the Rydberg Blockade radius Rb. If a control atom is excited to |r⟩, the massive energy shift prevents a target atom within Rb from also being excited, conditionally mediating the phase shift required for universal entanglement [67].
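A back-of-the-envelope estimate of the blockade radius follows from setting the interaction shift equal to the Rabi frequency, Rb = (C6/ħΩ)^(1/6). The C6 and Ω values below are assumed, order-of-magnitude figures for a Rydberg state around n ≈ 70, not measured constants:

```python
# Estimate the Rydberg blockade radius R_b = (C6 / ħΩ)^(1/6):
# the blockade extends to where the van der Waals shift V = C6/R^6
# exceeds the excitation laser's Rabi frequency Ω.
# Both parameters below are illustrative assumptions.
C6_over_h = 870e3          # MHz · μm^6, assumed for n ≈ 70
Omega_over_2pi = 2.0       # MHz, assumed Rabi frequency

R_b = (C6_over_h / Omega_over_2pi) ** (1 / 6)
print(f"blockade radius ≈ {R_b:.1f} μm")
# Since C6 ∝ n^11, the radius grows as R_b ∝ n^(11/6):
# a modest increase in n substantially enlarges the blockade volume.
```

The resulting radius of several micrometers comfortably exceeds typical tweezer spacings, which is what makes the conditional phase gate practical.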
High-Genus Topological Codes
Because optical tweezers can dynamically move atoms during a computation, neutral atom arrays can synthesize 3D lattices and high-genus topologies, overcoming the planar restrictions of superconducting chips [64]. Recent milestones for the platform include:
- Massive Scaling: Demonstration of a coherent 3,000-qubit system [68] and metasurfaces generating over 78,000 tweezers [69].
- Algorithmic Breakthroughs: The first logical execution of Shor’s algorithm [70] and experimental logical magic state distillation [71].
- Fault Tolerance: Low-overhead transversal fault tolerance for universal computation [72].
Trapped Ions and the QCCD Architecture
Unlike superconducting qubits, trapped ions in a Quantum Charge-Coupled Device (QCCD) architecture are physically mobile [73]: ions are shuttled between dedicated memory zones and interaction zones.
All-to-All Connectivity
Because ions can be physically transported across the potential landscape, the architecture offers “all-to-all” connectivity. This allows direct execution of the non-local stabilizers essential for high-efficiency qLDPC codes.
Sympathetic Cooling
During transport, ions accumulate motional heating, which drastically reduces the fidelity of Mølmer-Sørensen entangling gates. Because laser cooling the qubit ion directly would destroy its internal quantum state, researchers trap a secondary “refrigerant” species (e.g., ¹³⁸Ba⁺ alongside a ¹⁷¹Yb⁺ qubit). Laser cooling the barium ion sympathetically cools the ytterbium ion via the Coulomb interaction without disturbing the qubit state [74].
Photonic Modalities: Fusion-Based QEC
Photonic quantum computing circumvents decoherence limits by utilizing traveling photons. Rather than operating via a sequential circuit model, it leverages Measurement-Based Quantum Computing (MBQC) on massive entangled “cluster states.”
In PsiQuantum’s Fusion-Based Quantum Computing (FBQC) model, computation is driven by destructive multi-photon parity checks known as fusions [75]. Loss, the primary error channel in photonics, is managed not by active gate correction, but by treating lost photons as “erasures” within a highly redundant 3D topological defect network [76].
The Control System Bottleneck: Cryo-CMOS
Scaling to 1 million physical qubits introduces a catastrophic thermal bottleneck due to the heat load of millions of coaxial cables spanning from room temperature (300 K) to the mixing chamber (≈ 15 mK) [77].
The solution lies in Cryo-CMOS technology, integrated controllers operating at the 4 K stage. By multiplexing digital control signals and processing error syndromes locally within the cryostat, systems can overcome the I/O latency constraint, effectively bridging the final physical gap in the hardware-software stack.
The Decoding Bottleneck: Real-Time Error Arbitration
A critical, often overlooked component of the hardware-software gap is the classical processing power required to interpret error syndromes. For a quantum computer to be “fault-tolerant,” it must identify and correct errors faster than they propagate.
Decoding Latency and Coherence Time
The decoding problem, mapping observed parity-check violations (syndromes) to the most likely physical errors, is typically solved using algorithms like Minimum Weight Perfect Matching (MWPM) or Union-Find.
- Superconducting Bottleneck: With gate times in the nanosecond range (10–100 ns), the classical decoder must resolve the error graph within microseconds. This necessitates hardware-level decoders (FPGA or ASIC) integrated directly into the Cryo-CMOS stack.
- Ion/Atom Advantage: These modalities have longer coherence times and slower gates (millisecond range), providing a more generous “time budget” for complex classical decoding algorithms.
Modality Comparison
| Property | Superconducting | Trapped Ion | Neutral Atom | Photonic |
|---|---|---|---|---|
| Leading vendors | Google, IBM | Quantinuum, IonQ | QuEra, Pasqal, Infleqtion | PsiQuantum, Xanadu |
| Gate speed | 10-100 ns | 1-100 μs | 1-10 μs | ~ns (measurement-based) |
| 2Q gate fidelity | 99-99.5% | >99.9% [78] | ~99.5% | Architecture-dependent |
| Connectivity | Nearest-neighbor (2D grid) | All-to-all (within chain) | Reconfigurable (mid-circuit) | Modular / networked |
| QEC cycle time | ~1 μs | ~10-100 μs | ~1-10 ms | Architecture-dependent |
| Scalability path | Foundry fabrication | Modular linking | Optical tweezer arrays | Semiconductor photonics |
| QEC code demonstrated | Surface code (below threshold) | Color code, various | Color code, hypercube code | Fusion-based (theoretical) |
| Key advantage | Speed, manufacturing maturity | High fidelity, connectivity | Reconfigurability, scale | Room temperature, networking |
| Key challenge | Cryogenics, limited connectivity | Speed, scaling beyond ~50 qubits | Atom loss, gate speed | Loss rates, determinism |
Table N: Summary of Quantum Computing Modalities
| Modality | Gate Speed | Scalability | Below-Threshold QEC? | Example Architectures | Notes |
|---|---|---|---|---|---|
| Superconducting | Fastest (10-100 ns) | Challenging (requires cryogenic cooling; 2D grid limits connectivity) | Yes (Google Willow, 2024) | Google Sycamore/Willow, IBM Eagle/Heron | Most mature platform; benefits from existing semiconductor fabrication infrastructure |
| Trapped Ion | Slow (1-100 μs) | More challenging (chains limited to ~50 ions; modular linking adds engineering complexity) | Yes (Microsoft/Quantinuum, 2024) | Quantinuum H-series, IonQ Forte | Highest gate fidelities (>99.5% 2Q); all-to-all connectivity within a chain eliminates routing overhead |
| Neutral Atom | Moderate (1-10 μs) | Less challenging (optical tweezer arrays scale to thousands; demonstrated 3,000+ qubits) | Yes (QuEra, 2024) | QuEra Aquila, Pasqal Fresnel, Infleqtion Sqorpius | Most rapidly advancing modality; reconfigurable connectivity; first logical Shor’s execution (Infleqtion, 2025) |
| Photonic | Fast (~ns, measurement-based) | Less challenging (room temperature; naturally modular and networked) | No | PsiQuantum, Xanadu Borealis | Theoretical runtime advantages for certain cryptanalytic workloads (2-20x); least experimentally mature |
| Spin | Moderate (10 ns - 1 μs, depending on platform) | Less challenging (operates at 1-4 K; mature CMOS production) | No | Intel Tunnel Falls, Diraq Crossbar, QuTech QARPET | Fabricated on standard 300 mm CMOS lines; diverse realizations (Si spin or NV center) |
The Performance Layer: QCVV and QEM
For any hardware modality, the bridge between raw physical qubits and reliable computation is built on two pillars: Quantum Characterization, Verification, and Validation (QCVV) and Quantum Error Mitigation (QEM) [79].
QCVV: The Diagnostic Framework
QCVV is the rigorous scientific process of determining what the hardware actually does. Because quantum states are fragile and cannot be observed directly without collapsing them, QCVV uses indirect statistical methods to build a high-fidelity model of the system’s behavior.
- Characterization: Identifying specific noise parameters, such as T₁ (relaxation) and T₂ (dephasing) times, or gate fidelities via Gate Set Tomography (GST) [79, 80].
- Verification & Validation: Confirming that the hardware performs as specified. This involves benchmarks like Randomized Benchmarking (RB) [81] and Quantum Volume (QV) [82], which provide a holistic score of the system’s operational capacity.
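The core of a randomized-benchmarking analysis is a single exponential fit: survival probability decays as F(m) = A·pᵐ + B with sequence length m, and the average error per Clifford is r = (1 − p)(d − 1)/d with d = 2 for one qubit. The sketch below generates synthetic data from an assumed p = 0.995 (all parameters are illustrative) and recovers it:

```python
import numpy as np

# Synthetic randomized-benchmarking fit: F(m) = A·p^m + B.
# p_true, A, B, and the noise level are assumed values for this sketch.
rng = np.random.default_rng(1)
p_true, A, B = 0.995, 0.5, 0.5
m = np.arange(1, 400, 20)                       # sequence lengths
F = A * p_true**m + B + rng.normal(0, 0.002, m.size)

# With B known (0.5 here, the depolarizing asymptote), the decay
# becomes linear in log space: log(F - B) = log(A) + m·log(p).
slope, _ = np.polyfit(m, np.log(F - B), 1)
p_est = np.exp(slope)
r_est = (1 - p_est) / 2                         # error per Clifford, d = 2
print(f"p ≈ {p_est:.4f}, error per Clifford ≈ {r_est:.2e}")
```

In practice B is fit rather than assumed, but the log-linear view shows why RB is robust: state-preparation and measurement errors only shift A and B, not the decay rate p.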
QEM: The Remedial Layer
While QCVV diagnoses the noise, Quantum Error Mitigation (QEM) works to suppress its impact on final results without the massive qubit overhead required for full Fault-Tolerant Quantum Error Correction (FTQEC). QEM is essential for the current NISQ (Noisy Intermediate-Scale Quantum) era [83].
- Zero-Noise Extrapolation (ZNE): Intentionally scaling up noise in a circuit and then extrapolating back to the “zero-noise” limit to estimate the ideal result [83].
- Probabilistic Error Cancellation (PEC): Using a known noise model (derived from QCVV) to apply a “quasi-probability” distribution that cancels out errors across an ensemble of circuit runs [83].
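The ZNE idea above fits in a few lines: evaluate the observable at amplified noise scales λ = 1, 2, 3, fit a curve, and read off λ = 0. The exponential damping model and its rate below are assumptions chosen purely to make the extrapolation concrete:

```python
import numpy as np

# Toy zero-noise extrapolation. The noisy expectation value is modeled
# as exponential damping of the ideal value; model and rate are assumed.
ideal = 1.0

def noisy_expectation(lam, rate=0.05):
    return ideal * np.exp(-rate * lam)   # damping grows with noise scale λ

lams = np.array([1.0, 2.0, 3.0])         # amplified noise scale factors
vals = noisy_expectation(lams)

# Richardson-style extrapolation: fit a quadratic through the three
# points and evaluate it at λ = 0 (the "zero-noise" limit).
coeffs = np.polyfit(lams, vals, 2)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"raw (λ=1): {vals[0]:.4f}, ZNE estimate: {zne_estimate:.4f}")
```

The extrapolated value lands far closer to the ideal 1.0 than the raw λ = 1 measurement, at the cost of extra circuit executions and amplified statistical variance.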
Key Distinction: QCVV provides the data (the identity and cause of errors), whereas QEM provides the action (the techniques that clean up results). Together, they enable hardware modalities to reach “quantum utility” long before physical hardware is perfectly noise-free.
The Rise of qLDPC Codes: Beyond the Surface Code
Quantum Low-Density Parity-Check (qLDPC) codes, such as Hypergraph Product and Bivariate Bicycle codes, are emerging as a high-efficiency alternative to the traditional surface code, offering constant encoding rates that significantly reduce physical qubit overhead. While the surface code’s 2D planarity is hardware-friendly, its encoding rate vanishes in the thermodynamic limit, necessitating a massive footprint, whereas Bivariate Bicycle codes can encode 12 logical qubits into just 144 physical qubits [62, 63]. However, implementing these codes requires non-local connectivity, making reconfigurable modalities like neutral atoms and trapped ions more suitable candidates than superconducting circuits [84]. Furthermore, hardware-intensive Magic State Distillation (MSD) consumes a considerable fraction of a system’s qubits when running many algorithms. Its emerging alternative, Magic State Cultivation (MSC), shifts the engineering challenge from raw qubit quantity to control complexity [85]: non-Clifford resources can be “grown” within the code itself, but this demands sophisticated real-time control and dynamic code-switching to maintain logical integrity.
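The encoding-rate gap cited above is simple arithmetic. Counting data qubits only (measurement ancillas roughly double both figures), a distance-d rotated surface code uses d² data qubits per logical qubit, while the [[144, 12, 12]] Bivariate Bicycle code packs 12 logical qubits into 144 data qubits at the same distance d = 12:

```python
# Data-qubit overhead comparison: rotated surface code vs. the
# [[144, 12, 12]] Bivariate Bicycle code, both at distance d = 12.
# Ancilla qubits are excluded from both counts for a like-for-like view.
d = 12
surface_per_logical = d * d        # 144 data qubits encode 1 logical qubit
bb_per_logical = 144 // 12         # 12 data qubits per logical qubit

print(f"surface code (d={d}): {surface_per_logical} data qubits/logical")
print(f"bivariate bicycle:    {bb_per_logical} data qubits/logical")
```

At equal distance, the qLDPC code improves the encoding rate by roughly 12×, which is exactly the overhead reduction driving interest in non-planar connectivity.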