Meanwhile, server clustering and other OS- and software-based high availability strategies have become an increasing focus of Unix and Linux system development, with Windows not far behind. For all but the most mission-critical applications (a category that includes most conventional Internet apps), software-based server clustering and redundancy/failover provide more than adequate availability for large enterprise and even service provider needs.
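Such failover schemes usually rest on a simple heartbeat test: a standby node declares the primary dead once it has missed several consecutive heartbeats, then takes over its work. A minimal sketch of the decision logic in Python (the interval and threshold here are illustrative defaults, not any vendor's):

```python
def should_fail_over(last_heartbeat, now, interval=1.0, missed_limit=3):
    """Return True when a standby node should take over from the primary.

    The primary is presumed dead once more than `missed_limit` heartbeat
    intervals have elapsed since its last heartbeat. Both parameters are
    illustrative; real cluster stacks tune them per deployment.
    """
    return (now - last_heartbeat) > interval * missed_limit
```

In practice the standby would also fence the old primary (power it off or cut its network path) before taking over, to avoid a "split brain" where both nodes serve the same address.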
The growing importance of server clustering and software high availability, of the "decomposed" vision for communications apps, and the emergence of host media processing have all conspired to supercharge development of true "blade servers" for the telecom marketplace. In recent months, Intel, IBM, HP, Sun and other vendors have fielded significant new products in this space or hard-announced major initiatives.
A blade server is a chassis designed to support (house, power, cool, network, monitor and control) large numbers of discrete single-board computers (SBCs), plus monitoring equipment, power and cooling, mass storage arrays and other adjuncts, doing so in far less space, with less power dissipation, less need for cooling and easier maintainability than typical rackmount "pizza box" servers.
Most current architectures put Gigabit Ethernet across the backplane in a star configuration, and include a high-speed switch for uncontended communications. All feature quick power-down and hot-swap of blade components.
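Hot-swap implies that the chassis management layer tracks slot occupancy, so a blade can be powered down, pulled and replaced without disturbing its peers. A toy model of that bookkeeping (class and method names are our invention, not any vendor's management API):

```python
class Chassis:
    """Toy model of a blade chassis slot registry (illustrative only)."""

    def __init__(self, slots):
        # Map slot number -> blade identifier, or None if the slot is empty.
        self.slots = {n: None for n in range(slots)}

    def insert(self, slot, blade_id):
        """Seat a blade in an empty slot; refuse to double-populate."""
        if self.slots[slot] is not None:
            raise ValueError(f"slot {slot} already occupied")
        self.slots[slot] = blade_id

    def remove(self, slot):
        """Hot-swap out whatever blade occupies the slot; peers are untouched."""
        blade, self.slots[slot] = self.slots[slot], None
        return blade
```

A real management controller would also sequence power and signal the switch fabric on insert/remove events; this sketch only captures the slot-state accounting.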
The most impressive are NEBS- and/or ETSI-compliant for telco central office use. Some systems, like those made by Cubix, are radically scalable: you can buy a Cubix development platform with a single SBC for scarcely more than you'd pay for a standard server, develop your app against a faithful replica of Cubix's management and monitoring framework, then swap that SBC (populated with your application) into a large-scale chassis with dozens of peers, assured of complete hardware/software compatibility between development and deployment platforms.