Facebook's Data Center: Where Likes Live

The small operating staff is seldom uncomfortable. All network connections and equipment servicing are handled from the front of the racks -- the cold aisle; none is done from the back, the hot aisle. During our visit on Feb. 20, the hot aisle was about 72 to 74 degrees, mainly because the temperature outside was 30, with snowflakes in the air. In the fan room on the roof, most of the big exhaust fans were idle, and the surplus heat was being routed to warm the cafeteria, office space and meeting rooms.

Facebook has applied for a patent on the way it steps down 12,500-volt power from a grid substation to the server racks. It brings 480-volt power into the data center to a reactor power panel at each cold aisle, which delivers 240-volt power to three banks of power supplies on each server rack. The design eliminates one transformer step, an energy-saving move, since some power is lost with each step down in the conversion chain. Most enterprises lose about 25% of the power they bring into the data center through these steps; Facebook loses 7%.
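To make that arithmetic concrete, here's a minimal Python sketch of how conversion losses compound. The per-step efficiencies are illustrative round numbers chosen to land near the article's 25% and 7% figures, not Facebook's measured values.

```python
# Losses compound multiplicatively at each conversion step, so removing
# one transformer stage raises end-to-end efficiency. The per-step
# efficiencies below are hypothetical round numbers for illustration.

def delivered_fraction(step_efficiencies):
    """Fraction of incoming power that actually reaches the servers."""
    result = 1.0
    for eff in step_efficiencies:
        result *= eff
    return result

# A conventional chain with several step-downs and a central UPS.
conventional = delivered_fraction([0.98, 0.94, 0.92, 0.90])
# A shorter chain like Prineville's 12,500V -> 480V -> 240V design.
streamlined = delivered_fraction([0.98, 0.97, 0.98])

print(f"conventional: {1 - conventional:.0%} lost")  # ~24% lost
print(f"streamlined:  {1 - streamlined:.0%} lost")   # ~7% lost
```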

Not every idea implemented at Prineville was invented by Facebook. Facebook executives give some credit to Google for the idea of a distributed power supply unit with a battery on each server, as opposed to battery backup at a central point where power feeds into the data center. In the central design, incoming power is converted from alternating current to direct and back to alternating -- a double conversion required to keep the battery backup fully charged and ready the instant the grid supply fails -- and it cost 5% to 8% of the power at predecessor data centers. Google cut that penalty to a much smaller percentage by distributing battery backup to each server.
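A rough sketch of why that matters at data center scale: the 5% to 8% double-conversion figure is from the article, while the 10-megawatt load and the roughly 1% overhead assumed for a distributed, per-server battery design are hypothetical values for illustration.

```python
# Compare the annual energy burned by a central double-conversion UPS
# (every watt passes through AC -> DC -> AC) against a distributed
# design with a battery on each server. All figures are illustrative.

def annual_overhead_kwh(it_load_kw, overhead_fraction, hours=8760):
    """Energy spent on power conversion rather than on computing."""
    return it_load_kw * overhead_fraction * hours

it_load_kw = 10_000  # hypothetical 10 MW IT load

central = annual_overhead_kwh(it_load_kw, 0.065)     # midpoint of 5%-8%
distributed = annual_overhead_kwh(it_load_kw, 0.01)  # assumed ~1%

print(f"central UPS overhead: {central:,.0f} kWh/year")
print(f"distributed overhead: {distributed:,.0f} kWh/year")
print(f"saved:                {central - distributed:,.0f} kWh/year")
```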

But Facebook is happy to take credit for its own innovations as well. And perhaps more importantly, it's publishing the details of its power-conserving servers in the Open Compute Project and opening up its data centers for wide inspection.

Crass, an athletic 36-year-old garbed in a Facebook hoodie, jeans and sneakers, seems like he would be as much at home posting his latest achievements on the surfboard or ski board to a Facebook page as managing the massive complex day after day. But he says it's the job he was cut out for.

Much of the real work of managing the facility is done by software that regulates the airflow and monitors the systems. The servers themselves, he said, are governed by a system that can invoke auto-remediation if a server stalls for any reason.

"Maybe a server is really wedged and needs a reboot. The remediation system can detect if the image is corrupted on the drive and can't reboot. Then it will re-image the machine" with a fresh copy, he explained. No technician rushes down the cold aisle to find the stalled server and push a reboot button. The remediation system "just solves most problems," he said.

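For readers who want to picture the escalation, here's a minimal, self-contained sketch of that remediation flow. Facebook's internal tooling isn't public, so the Server class and its fields below are hypothetical stand-ins; only the order of operations -- detect the stall, try a reboot, re-image if the on-disk image is corrupted, page a human as a last resort -- comes from Crass's description.

```python
# Hypothetical stand-in for the auto-remediation flow described above.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    responsive: bool = True      # answers health checks
    image_corrupt: bool = False  # on-disk OS image is damaged

    def reboot(self) -> bool:
        """A reboot clears a wedge unless the image itself is corrupt."""
        if not self.image_corrupt:
            self.responsive = True
        return self.responsive

    def reimage(self) -> None:
        """Write a fresh image; afterward the machine boots normally."""
        self.image_corrupt = False
        self.responsive = True

def remediate(server: Server) -> str:
    if server.responsive:
        return "ok"
    if server.reboot():          # most wedged servers recover here
        return "rebooted"
    if server.image_corrupt:     # can't reboot: push a fresh copy
        server.reimage()
        return "reimaged"
    return "needs-human"         # rare case: page a technician

print(remediate(Server("web-0412", responsive=False)))                      # rebooted
print(remediate(Server("web-0413", responsive=False, image_corrupt=True)))  # reimaged
```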
Crass isn't allowed to give a count of the total number of servers currently running; asked for one, he says "tens of thousands." For purposes of comparison, Microsoft built a 500,000-square-foot facility outside Chicago that houses 300,000 servers. Reports on the capital costs for one building at Prineville show a total expense of $210 million, but that's not the cost of a fully equipped building. Microsoft and Google filings for large data centers in Dublin, Ireland, show costs between $300 million and $450 million.

The Prineville complex sits in the middle of a power grid that carries hydroelectric power from Bonneville and other Northwest dams to California and Nevada. Visitors pass under a giant utility right of way -- three sets of towers -- not far from the Prineville site.

The mega data center is a new order of compute power, operated with a degree of automation and efficiency that few enterprise data centers can hope to rival. For Crass, it's the place he wants to be. He and his wife lived in Portland before he took a job on a project in Iowa. Given the option to take on Prineville, he jumped at it. He knew it would be an implementation of the Open Compute architecture and a working test bed for its major concepts.

"I love it. It's an amazing place to work. It's open to everybody. You're able to be here and walk through it and take pictures," he noted at the end of the tour. Everybody likes to be running something cool and letting the world know about it, he said.

The Prineville data center incorporates the latest cloud server hardware, a huge picture-storage service and a lean staff, Crass points out. For now, at least, this complex sports the best energy-efficiency rating of any major data center in the world, and the lessons being learned here will reverberate through data center design for years to come.

