Until recently, I viewed VSAs (virtual storage appliances) as creatures of the lab, where I'm constantly building and tearing down configurations to test one product or another. I've saved many hours, and countless thousands of dollars, by spinning up as many as a dozen virtual iSCSI disk arrays when the need arose. But I've started taking VSAs more seriously as I plan my doctor friend's upgrade from a Windows 2003 infrastructure to the promised land of virtual servers.
Although he runs several walk-in urgent care centers, my friend thinks more like a doctor than the CEO of what is, in reality, a multimillion-dollar business. Like most SMB owners, he spends more time on delivering his product or service than on budgeting, and, as at most SMBs, that means he'll run a server until it dies--or at least until I tell him it's going to die any day now.
That day arrived, and I started planning the new systems. Since today's Nehalem-based servers are roughly 20 times as powerful as the 700MHz Xeons in his current setup, I figured we'd consolidate the eight servers he's running now down to a pair of new Dell R510s running VMware Essentials Plus. While everything could have run on one server, the thought of putting all the eggs in one basket (not to mention the length of the outage if that server failed) led me to a pair. With a pair, we could also use vMotion to simplify hardware maintenance.
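To put rough numbers on that consolidation argument, here's a back-of-the-envelope sketch in Python. The utilization figure and the 20x speedup are my own illustrative assumptions, not measurements from Dr. George's servers:

```python
# Back-of-the-envelope consolidation math. Every number here is an
# illustrative assumption, not a measurement from the real environment.

old_servers = 8        # physical boxes running today
old_cpu_util = 0.15    # assumed average utilization of each old box
speedup = 20           # rough Nehalem vs. 700MHz Xeon performance ratio

# Total demand, expressed as a fraction of one new server's capacity.
demand = old_servers * old_cpu_util / speedup
print(f"Consolidated load: {demand:.0%} of one new host")  # ~6%

# The case for a second host isn't capacity, it's availability:
# either host can absorb the whole load during maintenance or a failure.
print(f"Load on a lone surviving host: {demand:.0%}")
```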
Of course, using vMotion requires shared storage, and there we hit a snag. EqualLogic arrays, my first choice, are just too expensive for this environment. Even a low-end disk array like an HP MSA or Dell MD3200 would cost more than $10,000 with six 300-GByte SAS drives, and that's for a single-controller model. Going to a dual-controller system, which of course the enterprise storage guy inside me says is required, would add another $3,500 or so.
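Here's how that math shakes out in a quick sketch. The array prices are the quotes above; the per-host VSA license price is a placeholder I made up for illustration, so check each vendor's actual list price:

```python
# Quick cost comparison: physical dual-controller array vs. a VSA pair.
# Array figures come from the quotes above; the VSA license price is a
# placeholder assumption -- check each vendor's actual pricing.

array_single_controller = 10_000  # MSA/MD3200-class, six 300-GByte SAS drives
dual_controller_premium = 3_500   # option for the second controller
physical_array = array_single_controller + dual_controller_premium

vsa_license_per_host = 2_500      # ASSUMPTION: varies widely by vendor
hosts = 2
# The local disks and RAID controllers ship in the R510s either way,
# so they're treated as sunk cost here.
vsa_option = vsa_license_per_host * hosts

print(f"Dual-controller array: ${physical_array:,}")
print(f"VSA on both hosts:     ${vsa_option:,}")
```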
But if I installed a VSA with synchronous mirroring, like HP's P4000 (LeftHand), StorMagic, FalconStor NSS or StarWind HA, in each server, I could leverage the built-in RAID controllers, eliminate the storage controller as a single point of failure and theoretically improve reliability by keeping two synchronized copies of all the data.
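The key behavior behind that reliability claim is that a synchronous mirror doesn't acknowledge a write until both copies are committed. Here's a minimal sketch of that write path; it's a toy illustration of the concept, not how any of these vendors actually implements it:

```python
# Minimal sketch of a synchronous mirrored write path. This illustrates
# the concept only -- it is NOT how HP, StorMagic, FalconStor, or
# StarWind actually implement it.
from concurrent.futures import ThreadPoolExecutor


class DictStore:
    """Toy block store backed by a dict; stands in for one host's local RAID."""

    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data


class SyncMirror:
    def __init__(self, local_store, remote_store):
        self.stores = [local_store, remote_store]  # one copy on each host
        self.pool = ThreadPoolExecutor(max_workers=2)

    def write(self, addr, data):
        # Issue the write to both copies in parallel...
        futures = [self.pool.submit(s.write, addr, data) for s in self.stores]
        # ...but acknowledge the VM only after BOTH copies have committed.
        # That ack-after-both rule is what makes the mirror synchronous:
        # either host can die and the surviving copy is guaranteed current.
        for f in futures:
            f.result()  # re-raises if either write failed

    def read(self, addr):
        # Reads are served from the local copy, so the replication
        # penalty lands on writes, not reads.
        return self.stores[0].blocks[addr]


mirror = SyncMirror(DictStore(), DictStore())
mirror.write(0, b"block of data")
assert mirror.read(0) == b"block of data"
```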
On the downside, I'll give up some performance and host resources to the replication, but Dr. George's I/O requirements are pretty light, and we have CPU to burn in the new config. Now I just have to choose a solution.