A new era in quantum computing is taking shape as IBM unveils Starling, a system engineered to detect and correct its own errors while performing meaningful quantum calculations that would be unattainable with classical methods alone. The company positions Starling as a milestone that moves beyond simply counting qubits toward building functional compute units capable of reliable operation. IBM projects that Starling will perform about 100 million operations without error on a foundation of 200 logical qubits, and it aims to have the system ready for use by 2029. Alongside this ambitious hardware vision, IBM is detailing the intermediate steps that will bridge today’s experimental progress to Starling’s full-stack capabilities, including modular processors designed to host error-corrected qubits and the engineering work required to stitch these units into a large, working quantum computer.
A New Phase in IBM’s Quantum Roadmap
IBM’s latest disclosures mark a clear shift in how the company talks about and plans for quantum computing. Rather than emphasizing the tally of hardware qubits alone, IBM is outlining a pathway to assembled, error-resilient compute units that can be linked to perform complex algorithms. This transition represents a fundamental reorientation from “qubits in isolation” to “functional computation on integrated units.” If the integration approach works as intended, Starling could be assembled by connecting a sufficient number of these units, each acting as a cohesive quantum computing core rather than a mere collection of qubits.
This shift is accompanied by a candid acknowledgment that the major science questions around error correction have been substantially settled in principle, with engineering challenges taking center stage. IBM Vice President Jay Gambetta described the updated roadmap as a set of highly precise deliverables, intended to translate theoretical error-correction concepts into a concrete, manufacturable architecture. The company asserts that the ground has shifted from solving the science questions to solving the engineering puzzles required to scale up a practical, error-corrected quantum system. The message is intentionally ambitious: the engineering problem is well-bounded enough to pursue with a concrete plan and timeline.
Starling’s core ambition is to deliver a compute environment that can perform a substantial amount of meaningful quantum work without being derailed by errors, a capability that has eluded scaling attempts for many years. The goal hinges on indexing progress not merely by how many qubits can be fabricated, but by how many logical qubits can be stabilized and used for real computation. The practical implication is that IBM intends to define the project in terms of compute units rather than a raw tally of hardware qubits, making it possible to scale by aggregating units rather than by building a single monolithic device. The emphasis on units also implies a modular development process, in which a unit’s design can be validated, improved, and then integrated with others to form a larger machine.
From Qubits to Compute Units: The Architecture Concept
A central theme of IBM’s Starling plan is a rethinking of quantum error correction in a way that aligns with modular, scalable compute units. The company explains that error correction in quantum hardware entails entangling a designated set of data qubits in a configuration that distributes quantum information across multiple qubits, alongside additional qubits dedicated to monitoring the system’s state. These monitoring qubits generate syndrome data through weak measurements, which scientists interpret to detect whether and where an error has occurred and to guide corrective actions.
Error-correcting codes are a family of strategies for embedding data qubits into larger structures of physical qubits. The essential idea is that, by sacrificing some hardware resources, one can gain protection against errors that would otherwise rapidly degrade quantum information. In practice, the specific code chosen matters: more physical qubits tied to the code generally enhance robustness and increase the number of logical qubits that can be realized on a given device. Different quantum hardware platforms have varying degrees of flexibility for hosting these codes. Trapped ions and neutral-atom approaches, for instance, can often rearrange qubits to access a broad range of entanglement patterns, albeit at the cost of operational overhead for moving particles into position.
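To make the syndrome idea concrete, here is a minimal classical sketch built on the 3-bit repetition code, the simplest error-correcting code. It is not one of the codes IBM uses, and real quantum codes must also handle phase errors and extract parities without directly reading the data qubits, but the bookkeeping is the same: parity checks produce a syndrome, and the syndrome points at the error.

```python
# Toy syndrome extraction with the 3-bit repetition code (0 -> 000, 1 -> 111).
# The two parity checks play the role of the monitoring qubits described above.

def syndrome(bits):
    """Parity of neighboring pairs: (q0 xor q1, q1 xor q2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome value identifies the single bit flip that explains it.
CORRECTION = {
    (0, 0): None,  # all checks satisfied: no error detected
    (1, 0): 0,     # first check violated: q0 flipped
    (1, 1): 1,     # both checks violated: q1 flipped
    (0, 1): 2,     # second check violated: q2 flipped
}

def correct(bits):
    flip = CORRECTION[syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

noisy = [1, 0, 1]        # logical 1 (111) after a flip on the middle bit
print(syndrome(noisy))   # (1, 1) -> error localized to q1
print(correct(noisy))    # [1, 1, 1] -> logical state recovered
```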
IBM’s superconducting qubits, fabricated as chips with fixed wiring between qubits, operate under different constraints. The chip’s wiring geometry is established during fabrication, constraining which error-correction codes can be efficiently implemented. This rigidity has historically made it challenging to accommodate error-correction schemes that require highly interconnected qubits, particularly those demanding dense or nonplanar connectivity. To address this, IBM’s current processors employ a “heavy hex” wiring topology—an arrangement that minimizes crosstalk and preserves coherence by limiting problematic interactions among neighboring qubits. Yet Starling’s approach calls for an error-correction code incompatible with the heavy hex geometry, prompting IBM to pursue two major advances.
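The constraint can be phrased as a simple graph question: every parity check a code needs must touch qubits that share a physical coupler. The sketch below, which uses a made-up 3×3 lattice fragment rather than IBM’s actual layouts, shows how a fixed coupling map admits some checks and rules out others.

```python
# Hedged sketch: a fixed coupling map modeled as a set of qubit pairs, plus a
# test for whether a code's required two-qubit checks are all physically wired.
# The 3x3 square-grid fragment below is illustrative, not IBM's layout.

def supports(coupling_map, required_pairs):
    """True only if every interaction the code needs has a physical coupler."""
    wired = {frozenset(edge) for edge in coupling_map}
    return all(frozenset(pair) in wired for pair in required_pairs)

def average_degree(coupling_map, n_qubits):
    degree = [0] * n_qubits
    for a, b in coupling_map:
        degree[a] += 1
        degree[b] += 1
    return sum(degree) / n_qubits

# 3x3 square grid, qubit (row, col) -> index 3*row + col.
square = (
    [(3*r + c, 3*r + c + 1) for r in range(3) for c in range(2)]      # horizontal
    + [(3*r + c, 3*(r + 1) + c) for r in range(2) for c in range(3)]  # vertical
)

print(average_degree(square, 9))           # ~2.7 on this small fragment, 4 in the
                                           # bulk; heavy-hex qubits top out at 3
print(supports(square, [(0, 1), (1, 4)]))  # True: nearest-neighbor checks
print(supports(square, [(0, 4)]))          # False: a diagonal check is not wired
```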
Advances in Packaging and Interconnectivity
IBM’s first key advance centers on chip packaging, where the company has introduced multi-layer wiring that sits above the plane of the hardware qubits themselves. This architecture enables the dense, high-connectivity interconnects required by the LDPC (low-density parity-check) code that IBM is pursuing. By lifting many connections to layers above the chip, IBM can realize the wiring complexity needed for robust LDPC-based error correction while preserving the integrity of the underlying qubits. The packaging development will first appear in a processor named Loon, part of IBM’s development roadmap. As Gambetta put it, the team has demonstrated the trio of high connectivity, long-range couplers, and “couplers that break the plane” to connect distant qubits, and the next step is to demonstrate these features together as a single packaging solution.
A visual contrast accompanies this development: the left side of IBM’s presentation shows a simple layout of connections in the current-generation Heron processor, while the right side presents the more intricate web of wiring projected for the Loon device. The Loon project is slated to be publicly demonstrated later in the year, offering a tangible step toward the dense interconnects required by LDPC-based codes.
The second major advance involves suppressing, by other means, the crosstalk that the heavy-hex geometry was designed to avoid, enabling the adoption of an error-correction code that requires a different connectivity pattern. IBM is pursuing a “square” qubit array that keeps crosstalk low without the heavy hex’s sparse wiring. Gambetta described this development as a near-term experimental platform called Nighthawk, intended to dramatically increase qubit density and reduce the overhead of performing quantum calculations. The goal is a higher qubit count with much lower error overhead, enabling more operations per unit of hardware and lowering overall resource requirements.
Nighthawk: A Closer Look at Near-Term Hardware
Nighthawk represents IBM’s near-term, user-facing hardware roadmap, designed to accelerate progress toward the more ambitious LDPC-based approach planned for Starling. In 2025, Nighthawk devices are expected to be released, with annual iterations through 2028 that progressively expand the number of error-free operations each device can perform. The plan envisions each Nighthawk processor hosting 120 hardware qubits; in 2026, three such processors could be chained together to function as a single unit, delivering 360 hardware qubits in aggregate. The subsequent year, 2027, would see a machine with nine linked Nighthawk processors, lifting the cumulative hardware qubit count beyond 1,000.
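The scaling arithmetic is simple enough to restate directly; the per-year processor counts below are taken from the plan as reported.

```python
# Reported Nighthawk scaling plan: 120 hardware qubits per processor,
# with the number of linked processors growing each year.
QUBITS_PER_PROCESSOR = 120
LINKED_PROCESSORS = {2025: 1, 2026: 3, 2027: 9}

for year, chips in LINKED_PROCESSORS.items():
    print(year, chips * QUBITS_PER_PROCESSOR, "hardware qubits")
# 2025: 120, 2026: 360, 2027: 1080 -- the "beyond 1,000" figure
```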
This pipeline serves multiple purposes: it provides a practical demonstration of increasingly capable quantum memory and processing capabilities, while also enabling developers to experience and refine the integration challenges associated with large-scale, error-corrected quantum computation. IBM’s strategy is to use Nighthawk as a stepping stone toward Starling’s modular compute-unit concept, validating the interfaces, control logic, and software tooling necessary to operate several units in concert. The emphasis remains on achieving more operations without errors as the hardware evolves, enabling more sophisticated algorithms and longer computation timelines in a real-world setting.
The LDPC Code Family: Bicycle Codes and Logical Qubits
The heart of Starling’s error-correction strategy revolves around a particular class of LDPC codes described by IBM as a bivariate bicycle code, so named because its structure is built from a pair of cyclic, wheel-like components. This family addresses the practical challenge IBM faces: how to map error-correction logic onto a physical chip with feasible interconnects and manageable overhead. The bicycle code is designed to tolerate errors while allowing scalable implementation on superconducting qubits arranged in a fixed hardware layout.
IBM outlines two implementations of this LDPC scheme. The first uses 144 hardware qubits arranged to host 12 logical qubits, along with the requisite measurement qubits for error checking. The reported code distance for this configuration is 12, a metric indicating error-detection and correction strength. The second implementation scales up to 288 hardware qubits to host the same 12 logical qubits but raises the code distance to 18, which increases resilience to errors at the cost of additional qubit resources. IBM intends to deploy one of these 12-logical-qubit configurations as a Kookaburra processor in 2026, explicitly for stable quantum memory applications.
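In the standard [[n, k, d]] shorthand for quantum codes (data qubits, logical qubits, code distance), those two configurations read [[144, 12, 12]] and [[288, 12, 18]]. The short sketch below restates the trade-off: doubling the data qubits buys distance, and with it error-correcting power, rather than more logical qubits.

```python
# The two reported LDPC configurations in [[n, k, d]] form. A distance-d code
# can correct up to (d - 1) // 2 arbitrary errors; n counts data qubits only,
# before the separate measurement qubits are added.
configs = [(144, 12, 12), (288, 12, 18)]

for n, k, d in configs:
    print(f"[[{n},{k},{d}]]: {n // k} data qubits per logical qubit, "
          f"corrects up to {(d - 1) // 2} arbitrary errors")
# [[144,12,12]]: 12 data qubits per logical qubit, corrects up to 5 errors
# [[288,12,18]]: 24 data qubits per logical qubit, corrects up to 8 errors
```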
Beyond quantum memory, IBM envisions a follow-on that couples a small cluster of additional qubits to generate quantum states needed for certain operations. This composite unit combines the memory qubits with auxiliary qubits essential for a broader set of quantum manipulations, culminating in a single, functional computation unit built on a single chip. The Cockatoo chip will enable multiple processing units to be linked on a shared bus, raising the system’s potential logical-qubit capacity beyond a dozen, with one of each unit’s twelve logical qubits reserved to mediate entanglement with other units rather than to perform calculation. The progression then leads to the initial test versions of Starling, designed to enable universal quantum computations that unfold across several chips with a limited set of logical qubits.
This entire LDPC strategy is directed at achieving scalable, error-corrected quantum memory and computation, with explicit plans for how the various chips and units fit together. The LDPC codes’ relative robustness against errors translates into a practical path forward for a quantum computer capable of sustaining coherent operations across more qubits and longer computation times.
Real-Time Decoding: The Classical Backbone of Error Correction
An equally important facet of Starling’s architecture concerns the classical side of quantum error correction. Full error correction requires processing the syndrome data gathered from measurement qubits to determine the logical qubits’ state and whether any corrective action is necessary. As the logical-qubit count grows, the computational burden of this syndrome evaluation increases correspondingly. If this processing cannot be performed in real time, error-corrected quantum computation becomes infeasible.
IBM has developed a message-passing decoder to address this bottleneck, enabling parallel evaluation of syndrome data. The decoder explores broader regions of the solution space by injecting stochastic variation into the algorithm’s past-memory weighting and by redirecting any non-optimal candidate solutions to new instances for further evaluation. The critical claim is that this approach can operate in real time when implemented on FPGAs (field-programmable gate arrays), ensuring the classical side can keep pace with the quantum subsystem and preserve overall performance.
This decoder design is not merely an optimization; it is a foundational component of Starling’s real-time error-correction workflow. It is intended to facilitate rapid decision-making about how to adjust the quantum system’s state in response to detected errors, a capability that becomes increasingly important as logical qubits and measurement complexity grow. Rapid, reliable real-time decoding is a prerequisite for scalable, error-corrected quantum computation on a practical scale, and IBM’s development team positions this decoder as a central element of the Starling architecture.
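IBM has not published the decoder’s internals in this announcement, so the sketch below is only a loose illustration of the two ingredients the description highlights: iterative updates driven by parity checks, and stochastic restarts that redirect stuck candidates to fresh starting points. A production FPGA decoder for LDPC codes is far more sophisticated, and the toy code and decoding rule here are assumptions chosen for brevity.

```python
import random

# Toy syndrome decoder: find a low-weight error pattern e with H @ e = s (mod 2).
# "Flip the bit in the most violated checks" stands in for message passing;
# random tie-breaking and restarts stand in for the stochastic variation
# described above. This is not IBM's algorithm.

H = [  # parity-check matrix of the 3-bit repetition code
    [1, 1, 0],
    [0, 1, 1],
]

def checks(H, e):
    return [sum(row[j] * e[j] for j in range(len(e))) % 2 for row in H]

def decode(H, target, max_iters=30, restarts=10, rng=random.Random(0)):
    n, best = len(H[0]), None
    for attempt in range(restarts):
        # The first attempt starts from the all-zero pattern; later restarts
        # begin from random candidates to escape local minima.
        e = [0] * n if attempt == 0 else [rng.randrange(2) for _ in range(n)]
        for _ in range(max_iters):
            violated = [a ^ b for a, b in zip(checks(H, e), target)]
            if not any(violated):
                if best is None or sum(e) < sum(best):
                    best = e[:]  # keep the lowest-weight explanation found
                break
            # Score each bit by how many violated checks it participates in.
            scores = [sum(H[i][j] * violated[i] for i in range(len(H)))
                      for j in range(n)]
            top = max(scores)
            flip = rng.choice([j for j, s in enumerate(scores) if s == top])
            e[flip] ^= 1
    return best

print(decode(H, [1, 1]))  # -> [0, 1, 0]: both checks fired, so q1 flipped
```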
The Universal Bridge, Cold CMOS, and System Architecture
A crucial architectural detail IBM emphasizes is the linkage between each functional unit, which the company dubs a Universal Bridge. Each connection between units requires a number of microwave-carrying cables equal to the code distance of the logical qubits it links: a distance-12 code would necessitate 12 individual cables to connect each chip to its peers or to the external controller. This design principle ensures that the quantum processing units can share qubits and measurement information efficiently as they are integrated into a larger system.
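The cable count therefore scales linearly with both the code distance and the number of inter-unit links, as in the toy calculation below (the module count and chain topology are assumptions for illustration, not published figures).

```python
# Cabling rule from the text: each link between units needs one microwave
# cable per unit of code distance. The topology below is hypothetical.
CODE_DISTANCE = 12

def cables(n_links, distance=CODE_DISTANCE):
    return n_links * distance

# A hypothetical linear chain of 8 modules has 7 neighbor-to-neighbor links.
print(cables(7))  # 84 cables at distance 12
```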
In parallel, IBM is pursuing control hardware that can operate inside the refrigeration environment itself, leveraging what the company terms “cold CMOS” technology. Cold CMOS is designed to function at 4 Kelvin, enabling more compact and efficient integration of control electronics with the quantum hardware and reducing latency and energy losses associated with extracting control signals from cryogenic environments. This combination — a tightly integrated Universal Bridge network, enhanced interconnects, and in-situ cold-control electronics — aims to minimize the overhead and latency involved in coordinating multiple compute units as they operate together.
IBM has also released renderings illustrating Starling’s expected physical layout: a sequence of dilution refrigerators connected by a central pipe that houses the Universal Bridge. The visualization communicates the architectural intent: a scalable, modular stack of cooling units that support a high-density, highly interconnected quantum computing fabric. Gambetta framed Starling as an architecture that is now sufficiently defined to place concrete expectations on what will be built, and he suggested that more detailed milestones would follow as the roadmap evolves.
This shift toward a compute-unit-centric architecture represents a departure from the historical emphasis on individual, isolated qubits, their connectivity, and their error rates. Current hardware error rates are approaching levels that make this compute-unit strategy viable, with Gambetta noting that further improvements are anticipated. The primary focus now is on connectivity designed to support functional quantum computation rather than solely on maximizing raw qubit counts. This reflects a pragmatic reorientation toward building a usable quantum computer through integrated units rather than chasing higher qubit counts alone.
Starling’s Timeline, Vision, and the Road Beyond
IBM’s forward-looking roadmap does not end with Starling. While Starling’s target is to deliver a system with 200 logical qubits capable of handling certain classes of problems, the company explicitly notes that this capacity will not yet be sufficient to tackle the most cryptographically challenging tasks, such as breaking current encryption standards. The broader ambition remains to push toward a far larger, more capable machine, with Blue Jay projected for around 2033 and envisioned to house roughly 2,000 logical qubits. In IBM’s current public depiction, Blue Jay appears as the next major milestone after Starling, reflecting a staged progression from a limited, error-tolerant system toward a much larger quantum computing platform.
The Starling project thus sits at the intersection of meaningful near-term progress and ambitious long-term goals. On the one hand, it promises practical computations that demonstrate error correction in a scalable, modular framework. On the other hand, it anchors a longer trajectory toward significantly larger quantum computers designed to run highly complex algorithms that could redefine what is computationally feasible. The company emphasizes that Starling’s development is a “four-year plan,” underscoring a defined period during which the core architecture, unit-level integration, and the supporting control and decoding infrastructure will mature. The broader roadmap, however, projects continued growth beyond Starling toward increasingly powerful quantum platforms in the 2030s.
Starling’s approach is anchored by a dual focus: advancing hardware architecture and delivering the software and control tools necessary to make a large, error-corrected quantum computer practical. IBM’s strategy to move from raw qubit counts to functional compute units is designed to produce a scalable, engineerable system whose performance can be improved progressively through iterative hardware and software enhancements. The company’s emphasis on precise, deliverable milestones signals a disciplined, engineering-driven path toward a quantum computer capable of performing robust computations across multiple chips, with real-time error correction and high interconnectivity at scale.
Conclusion
IBM’s Starling program marks a pivotal shift in how the company envisions building practical quantum computers. By reorienting the narrative from qubit tallies to functional compute units, IBM lays out a roadmap that emphasizes modularity, error correction, and scalable interconnectivity as the core enablers of a usable quantum system. The four-year plan centers on delivering an architecture capable of 100 million error-free operations on 200 logical qubits by 2029, while also laying out a detailed sequence of intermediate steps, including Loon packaging, Nighthawk, and the Kookaburra and Cockatoo processors, that validate the engineering foundations needed for Starling.
The technical innovations span both quantum and classical domains: advanced LDPC codes, highly interconnected multi-layer wiring, a real-time, message-passing decoder implemented on FPGAs, and cold CMOS control electronics that operate inside cryogenic environments. These elements combine into a coherent vision in which quantum compute units can be connected to form a larger, functioning machine. The roadmap continues beyond Starling toward larger-scale systems, with Blue Jay representing a 2033 goal of roughly 2,000 logical qubits, underscoring IBM’s commitment to a multi-stage evolution toward powerful quantum computation.
If Starling succeeds, the implications extend far beyond a single product milestone. The approach could redefine how quantum computers are designed, built, and operated — emphasizing robust, error-corrected computation, modular scalability, and integrated control. The focus on compute units rather than isolated qubits signals a shift in the industry toward architectures that are simultaneously more tractable to engineer, more adaptable to scaling, and more capable of delivering real-world quantum advantage. As the roadmap unfolds, researchers and developers will be watching closely to see how these innovations perform in practice, how the software and hardware interfaces evolve, and how quickly Starling and its successor systems can translate theory into tangible computational breakthroughs.