Dell put serious thought into the server's internal design. Even getting inside the box is easy: simply turn a thumbscrew on the machine's back and lift off the two lockable top covers. To service the memory cards and the floppy disk/CD-ROM drive controller, for example, you flip butterfly levers on the cards to raise them out of their connectors without removing them from the system, a technique that reduces the possibility of electrostatic-discharge damage.

Once the covers are off, you remove the six hot-plug fans in the center of the system and then actuate two levers on the front of the chassis to slide out the processor tray. This slide-out tray is easier to work with than more traditional methods of processor removal; you can end up with all the fans lying about, but that is a minor inconvenience. The processors and heat sinks are held down by a large, hinged metal cover. The heat sinks simply sit on the Xeon MP processors, which are secured in ZIF (zero-insertion-force) sockets beneath them.
The PowerEdge 6650's two 900-watt power supplies use large, nonstandard power cords (as does the HP system), which let the unit run on standard 110-volt power even when fully loaded. As with so many of the other components, a lever releases each power supply, and the two independent power inputs let you plug the server into separate power circuits for fault tolerance. Unfortunately, Dell's clever designers faltered here: The power supplies must be removed from the top, rather than from the front or back of the unit. IBM makes its power supplies replaceable from the rear, and HP's can be taken out of the front.
The PowerEdge 6650 has eight expansion slots. One is a legacy 32-bit PCI 2.2 slot, included for backward compatibility. Of the remaining seven PCI-X slots, three have a bus to themselves, while the other four share two buses in pairs; the two onboard NICs share the sixth bus. Although this machine has even more slots than the much larger IBM system, the Grand Champion HE chipset provides only six PCI-X buses, so slots outnumber buses and cards on shared buses contend for bandwidth. The aggregate bandwidth your cards demand must not exceed what the buses can supply, which argues for putting the highest-throughput cards in the three dedicated slots.
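To put numbers on that constraint: a 64-bit PCI-X bus moves 8 bytes per clock, so a 100-MHz bus peaks near 800 MB per second and a 133-MHz bus at just over 1 GB per second. The minimal Python sketch below totals the demand on each bus and flags any that are oversubscribed; the bus clocks, card mix, and throughput figures are illustrative assumptions, not Dell's published slot configuration.

    # Sketch: flag PCI-X buses whose cards demand more bandwidth than the bus supplies.
    # Bus clocks and per-card demands are illustrative assumptions, not Dell's specs.

    PCIX_BYTES_PER_CLOCK = 8  # a 64-bit bus moves 8 bytes per clock cycle

    def bus_capacity_mb(mhz):
        """Peak throughput of a 64-bit PCI-X bus in MB/s (100 MHz -> 800 MB/s)."""
        return mhz * PCIX_BYTES_PER_CLOCK

    # bus name -> (clock in MHz, list of per-card demands in MB/s)
    buses = {
        "dedicated-1": (100, [320]),       # one Ultra320 SCSI RAID controller
        "shared-A":    (66, [320, 250]),   # two cards contending for one slower bus
        "nic-bus":     (100, [125, 125]),  # two Gigabit NICs at ~125 MB/s each
    }

    for name, (mhz, demands) in buses.items():
        need, capacity = sum(demands), bus_capacity_mb(mhz)
        flag = "OK" if need <= capacity else "OVERSUBSCRIBED"
        print(f"{name}: {need} of {capacity} MB/s -> {flag}")

In this hypothetical loadout, the two cards sharing a 66-MHz bus ask for 570 MB/s of a 528 MB/s budget, which is exactly the contention the three dedicated slots let you design around.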
Dell was the only competitor to include two onboard Gigabit Ethernet NICs, which reside on the rear multi-I/O card along with the two USB ports, the serial port, and the PS/2 keyboard and mouse ports. The rear card is convenient, but you can't hot-swap it: You can set the NICs for failover and load balancing, but if both NICs fail, you cannot replace them on the fly. To add hot-swappable NICs, you'll need to disable the onboard ones. Dell says it put the NICs onboard to provide more usable PCI slots, and for all but the most mission-critical applications, they will serve nicely.