Over the past several years, a range of competing packet backplane solutions has been proposed - some built around the CPCI framework, others around competing high-availability bus standards such as VME. All these solutions share the notion of running an asynchronous, switched network across the backplane of a chassis, and extending this network via optical or copper cabling to other chassis, creating a large virtual machine in which all components (or at least all functional subsystems) can be, to some extent, mutually aware.
Getting five-nines or higher availability out of such a system depends, of course, on how its hardware is designed, on how the firmware and software running on that hardware behave - and on how much money is available.
Most of today's competing packet backplane standards can be deployed in several ways, depending on how resources are to be arranged in a finished product. At present, a "CPU or dual CPUs, plus resource boards in a chassis" model still dominates; as a result, current state-of-the-art packet-backplane systems are increasingly being designed in star topologies.
The star topology - or its extension, the "dual star" - reflects and improves upon a generation of thinking about redundant, split-backplane system designs. There's general agreement, however, that the best (if also the most expensive, short-term) solution for system throughput and reliability will be a full-mesh architecture, where every component has an uncontended path to every other component in the system.
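The cost difference between these topologies can be made concrete with a quick link-count calculation (a simple sketch; the board counts are illustrative, not drawn from any particular chassis spec):

```python
# Back-of-the-envelope comparison of backplane fabric topologies.
# For n node boards:
#   star:      one switch board, n links (all traffic contends at the switch)
#   dual star: two switch boards, 2n links (redundant, but still switch-bound)
#   full mesh: n*(n-1)/2 direct links (an uncontended path between every pair)

def star_links(n):
    return n

def dual_star_links(n):
    return 2 * n

def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (4, 8, 14, 16):
    print(f"{n:2d} boards: star={star_links(n):3d}  "
          f"dual star={dual_star_links(n):3d}  "
          f"full mesh={full_mesh_links(n):3d}")
```

The quadratic growth of the full-mesh link count is exactly why it is the most expensive option in the short term - and why star and dual-star designs still dominate shipping products.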
ENTER ADVANCEDTCA