Just a few years ago, every x86 server I specified for a client included a hardware RAID controller. Over the years, vendors brought down the cost of SAN-attaching servers, and server virtualization made shared storage a better idea, so we shifted to SAN storage. Now that we can combine a RAID controller that uses SSDs for caching with a virtual storage appliance that turns local disk into shared storage, I'm starting to think we should take another look at running RAID in the server.
I wrote last year in "The Case for VSAs" about how I was planning to use a virtual storage appliance in my doctor's data center, where the performance requirements were relatively low. Running the StorMagic VSA on two of what turned out to be three vSphere hosts is working well for Dr. George, and I get to sleep at night knowing that the two StorMagic VSAs synchronously mirror his data between the VSA-equipped machines. The data is safe from hardware failures since there are two copies of everything, and if there's a problem I don't have to rush over to restore from backup.
While we could have used a single-controller array like Overland's SnapSAN S1000 or Dell's MD3000, it would have cost around $8,000 and left the whole system vulnerable to a controller failure, which would have taken everything offline until the failed component was replaced. The StorMagic VSA cost just $2,000 plus the price of the hard drives, for a significantly more reliable solution.
As the folks from LSI were briefing me about their latest Nytro MegaRAID, which combines a SAS RAID controller and PCIe SSD, I realized that VSAs may not be just for SMB and remote-office applications that can live with SATA performance. By using a RAID controller that can cache to SSDs, we should be able to deliver better performance than any low-end shared array can, and save some money as well.
SSD caching isn't limited to the Nytro MegaRAID. LSI and Adaptec have been offering SSD caching as an option for their SAS RAID controllers for a while now, and the current versions of LSI's CacheCade and Adaptec's maxCache can not only use an SSD attached to the RAID controller as a read cache, but also cache writes to an SSD or mirrored pair of SSDs. Since most server vendors OEM their RAID controllers from either LSI or Adaptec, you might even be able to buy the caching option and SSDs for the controller that's already in your server. While some server vendors hide their OEM relationships, Dell has come out and said that its customers can add LSI's CacheCade to Dell's PERC H700 and H800 controllers, which are OEMed from LSI.
In our experience, adding SSDs with 5% to 10% of the capacity of the disks in the system as a cache can deliver a significant performance boost, as 70% to 80% of the system's I/Os end up being served from the cache. Since the SSDs are 10 to 15 times as fast as even 15K-rpm drives, the investment in SSD caching can boost your performance by a factor of five or more.
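To see how those numbers fit together, here's a quick back-of-the-envelope model in Python. It's my own sketch, not a vendor benchmark: it simply weights SSD and disk service times by the cache hit rate and reports the overall speedup. The hit rates and the 10x-15x SSD advantage are the figures above; the 90% case is included to show how sensitive the result is to hit rate.

# Rough model: a cache hit is served at SSD speed, a miss at disk speed.
# Hit rates and SSD-vs-15K-rpm speed ratios are the figures from the text;
# real-world results will vary with the workload.
def effective_speedup(hit_rate, ssd_advantage):
    hdd_time = 1.0                       # normalize disk service time to 1
    ssd_time = hdd_time / ssd_advantage  # SSD service time relative to disk
    average = hit_rate * ssd_time + (1.0 - hit_rate) * hdd_time
    return hdd_time / average

for hit_rate in (0.70, 0.80, 0.90):
    for ssd_advantage in (10, 15):
        print(f"{hit_rate:.0%} hits, SSD {ssd_advantage}x faster: "
              f"{effective_speedup(hit_rate, ssd_advantage):.1f}x overall")

At 70% to 80% hits this simple model lands in the 3x-4x range; push the hit rate toward 90%, or add write caching to the mix, and a factor of five is well within reach.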
The total cost for a pair of controllers, four midrange SSDs like the 256-Gbyte Crucial M4 and two copies of CacheCade should be on the order of $3,500. Add the StorMagic VSA and eight 2-Tbyte drives, and the whole solution comes in at less than $16,000, or about $3 per usable gigabyte--a true bargain for the level of performance and reliability it delivers.
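For the cost-per-gigabyte figure, the arithmetic works out roughly like this. The RAID level and the four-drives-per-host split are my assumptions; the text only specifies eight 2-Tbyte drives across the two mirrored hosts.

# Cost-per-usable-gigabyte check. Assumes four 2-Tbyte drives per host in
# RAID 5, with the StorMagic VSAs mirroring between the two hosts, so usable
# capacity equals one host's RAID set.
drives_per_host = 4
drive_tb = 2
usable_tb = (drives_per_host - 1) * drive_tb   # RAID 5 gives up one drive's worth
usable_gb = usable_tb * 1000                   # host-to-host mirroring adds no capacity
total_cost = 16_000                            # controllers, SSDs, CacheCade,
                                               # VSA licenses and drives
print(f"{usable_gb} GB usable at about ${total_cost / usable_gb:.2f}/GB")
# prints: 6000 GB usable at about $2.67/GB -- consistent with the ~$3/GB figure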
Hopefully, I'll be able to set something like this up in the lab and run it through its paces.
Disclaimer: LSI, Adaptec and StorMagic are not clients of DeepStorage LLC.