Open Compute Racks: Are We Going to Use Them in Our Data Centers?

Commodity servers: Open Compute has specifications for AMD and Intel servers, including motherboards, chipsets and the chassis. It's surprising how similar these are to traditional servers in terms of components and selection. A contract manufacturer could produce them without an electronics design team, reducing costs to production and QA testing. What would a server like this cost? I don't know, but I'd hazard about one-third the price of a conventional server--and getting three servers for the price of a single conventional model is serious motivation. With a good virtualization setup, you would have plenty of spare server capacity in case of hardware failures--even more than a single spare. That's a practical option.

Of course, you might not use these types of servers for every workload in an enterprise data center, but the 80/20 rule suggests that 80% of your servers could be these commodity systems. You might buy these in larger volumes, but it would still be cheaper overall. A lot cheaper.
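To make that concrete, here's a rough back-of-the-envelope sketch. The one-third price is only my guess above, and the baseline server price and fleet size below are purely illustrative assumptions, not figures from the Open Compute specifications:

```python
# Back-of-the-envelope cost comparison -- illustrative numbers only.
# Assumptions (not from the spec): a conventional server at $6,000,
# a 100-server fleet, commodity Open Compute servers at roughly
# one-third the price, and the 80/20 split suggested above.

CONVENTIONAL_PRICE = 6_000                  # assumed price per conventional server
COMMODITY_PRICE = CONVENTIONAL_PRICE / 3    # the "one-third" guess from above
FLEET_SIZE = 100                            # assumed fleet size

commodity_count = int(FLEET_SIZE * 0.8)     # 80% of workloads on commodity boxes
conventional_count = FLEET_SIZE - commodity_count

mixed_cost = commodity_count * COMMODITY_PRICE + conventional_count * CONVENTIONAL_PRICE
all_conventional = FLEET_SIZE * CONVENTIONAL_PRICE

print(f"All conventional servers: ${all_conventional:,.0f}")
print(f"80/20 commodity mix:      ${mixed_cost:,.0f}")
print(f"Savings:                  ${all_conventional - mixed_cost:,.0f} "
      f"({1 - mixed_cost / all_conventional:.0%})")
```

With those assumed numbers, the mixed fleet comes in at roughly half the cost of an all-conventional one; the exact figure depends entirely on the prices you plug in.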

Will Vendors Support It?

The conventional view is that few enterprise IT teams are willing to adopt radical ideas. In my view, it's certainly practical to use nonstandard racks and power systems, since we already do it today. Products such as IBM mainframes, HP NonStop and EMC VMAX arrays are examples of custom racks and hardware that already infest the physical data center and cause major problems in data center cooling, power distribution and weight management. It's not much more work to consider using Open Compute racks as a new template if they're cheaper and more efficient than other options.

The remaining question is whether vendors will produce compliant products. I'd say yes. Vendors will follow the money when there is enough customer demand. Open Compute will almost certainly be adopted by large corporate customers looking to reduce capex for new builds, and they can drive vendor engagement--that would be the first phase. Once products reach the market, availability could drive a second wave of adoption in the much larger market of midsized data centers. Change in the physical data center is seriously overdue, and this is a step in the right direction.

The specifications are published and released into the public domain as openly available documents on GitHub.