This is the first column in a series about building your own IT storage. I’ll discuss why and how I’ve done it, look at the technological and business drivers behind my decision, and share how I weighed the risks and benefits of the DIY route.
My company is an ISO-certified medical device manufacturer. We run three locations in two countries, including a large manufacturing plant in the Dominican Republic. We have an IT staff of five full-time employees spread across both countries, and we support a diverse application portfolio that includes the usual things: email, Web collaboration and multisite distributed file sharing.
We have some specific applications that are I/O- and resource-intensive, including Epicor's E9 enterprise ERP software. We also run specialty software for ISO compliance, quality management and product labeling.
Like other IT professionals, I've got a variety of storage needs. It seems like no matter how well I scale up, there's always a shortage of something. Sometimes it's space--15TB of backup, anyone? Other times it's I/O or latency. There are plenty of products on the market that can meet these needs, but they're often outside the reach of my budget.
I mostly run a Dell shop. Dell quotes me consistently lower prices on the same commodity hardware than sources such as CDW or PCMall, and some real deals are possible at the end of the quarter (with a little negotiation). Plus, I get enterprise replacement and support contracts.
That said, Dell going private with the support of Silver Lake Partners makes me wonder if those end-of-quarter deals are going to evaporate. That, in turn, makes me think I need other options.
Recently, I started shopping to fill the remaining drive bays in a Dell SAN, an MD3220 SAS unit. Dell wanted about $430 for a 1TB Seagate Constellation disk. I dug into the Dell docs to find the exact model, and, sure enough, that same disk is available on Newegg.com for $250 (without any volume discounts or deals). Through MALabs I can get it for $225. That's $205 in savings, nearly half off Dell's price. Even aggressive negotiation with Dell wouldn't get me those kinds of savings.
Then I looked into purchasing SLC SSDs for the same array. Talk about sticker shock--Dell wanted $3,000 for one, even though I can get the same drive online for $1,500. I also discovered I can get a high-grade MLC SSD with even better specs for $680 through a different distributor.
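To put those margins in perspective, here's the arithmetic as a quick Python sketch (the prices are just the quotes cited above):

# Vendor quote vs. best street price for the same drives (figures from above).
drives = {
    "1TB Seagate Constellation": (430, 225),  # Dell quote, MALabs price
    "SLC SSD": (3000, 1500),                  # Dell quote, online price
}
for name, (vendor, street) in drives.items():
    savings = vendor - street
    pct = 100 * savings / vendor
    print(f"{name}: ${savings} saved ({pct:.0f}% off the vendor quote)")

Run it and you get 48% off on the Constellation and a flat 50% off on the SLC drive.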
But there's a problem, and this is ubiquitous in the SAN industry. You can't add commodity drives, even the same model as the vendor sells, because the drive won't have the vendor's firmware. Downloading the manufacturer's firmware update utilities and trying to flash the drive with supported vendor firmware doesn't appeal to me. I'm confident I could reverse-engineer the firmware update to a point where it would work, but I'd be out of support, and possibly mistrustful of the modified drive's integrity. It's not the way to host enterprise apps.
I don't like getting screwed by unjustifiable margins, so I decided it was time to do things differently.
Enter DIY
Around this same time, a former client of mine returned a "budget" server I had built for a four-person office. The server uses a $125 Zotac motherboard with an integrated 1.6GHz dual-core Atom 330 and 4GB (the maximum) of memory. The Zotac ITX board has four SATA channels: two were running RAIDed 500GB drives for file storage and the OS, and one ran a SATA CD-ROM drive. The net cost of this "server," minus Windows licensing, was about $350 with drives. If I had bought just the board, chassis and memory, it would have been a measly $225.
This happened at a serendipitous time at the office. I was unsatisfied with how some enterprise apps were being backed up, and I wanted more storage space, but my budget really demanded some smart buying.
So, I bought four 3TB 3.5-inch commodity SATA disks ($500) and a dual-port Supermicro PCIe network card ($80) to complement the Nvidia nForce network card that came with the Zotac. The board also had a mini PCIe slot intended for use with a Wi-Fi adapter. SIL Technologies makes a two-port SATA board for mini PCIe, so I bought that, too ($49).
Now I have six channels of SATA2 connectivity, 12TB of raw space and two server-grade iSCSI NICs, with receive-side scaling to distribute the network load across the Atom's whopping dual 1.6GHz cores. Add two more drives for a total of 18TB of raw space, plus a bigger PSU to power it all.
Here's the total cost, including the $225 in parts I reused: board, memory and chassis ($225); 18TB of 3TB SATA drives ($750); network card ($80); mini PCIe SATA card ($50); and power supply ($50). My grand total was $1,155 in hardware.
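For anyone keeping score, here's the tally as a few lines of Python (the figures are straight from the parts list above):

# Parts list for the DIY storage box, in dollars.
parts = {
    "board, memory and chassis": 225,
    "six 3TB SATA drives": 750,
    "dual-port network card": 80,
    "mini PCIe SATA card": 50,
    "power supply": 50,
}
print(f"Grand total: ${sum(parts.values()):,}")  # Grand total: $1,155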
This is hardly a mission-critical storage array. But it's entirely suitable for auxiliary backup capacity and non-critical VMware ISO storage. If the thing dies, I'm not going to be upset. In the meantime, having the extra storage available via iSCSI is certainly a nice option.
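Since backup jobs will depend on this box being up, a trivial reachability check is handy for scripting. Here's a minimal Python sketch using only the standard library; the portal IP is a placeholder, and 3260 is the standard iSCSI target port:

import socket

# Hypothetical portal address; 3260 is the standard iSCSI target port.
PORTAL = ("192.168.1.50", 3260)

def portal_up(addr, timeout=3.0):
    """Return True if the iSCSI target portal accepts a TCP connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

print("iSCSI portal reachable" if portal_up(PORTAL) else "iSCSI portal down")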
The question then became: What OS can I put on this unit that will provide solid software RAID (two of the drives hang off a different controller, and I'd prefer RAID 50) and reliable performance for multipath iSCSI over dual active network connections?
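For context on the RAID 50 preference: RAID 50 stripes data across two or more RAID 5 spans, and each span gives up one drive's capacity to parity. Here's a quick Python check of what my six 3TB drives would yield (the two-span layout is my assumption):

def raid50_usable_tb(spans, drives_per_span, drive_tb):
    """RAID 50 stripes across RAID 5 spans; each span loses one drive to parity."""
    assert spans >= 2 and drives_per_span >= 3, "RAID 50 needs >= 2 spans of >= 3 drives"
    return spans * (drives_per_span - 1) * drive_tb

# Six 3TB drives as two three-drive RAID 5 spans:
print(raid50_usable_tb(spans=2, drives_per_span=3, drive_tb=3))  # 12 TB usable of 18TB raw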
There are a lot of options, and some serious pros and cons involved. In my next column I'll look at the SAN software landscape and explain why I decided that Windows Storage Server 2012 was my OS of choice ... for the moment.