As flash evolved from being the technology equivalent of caviar or truffles -- a luxury ingredient that we use more as a seasoning than to satisfy our appetite for storage capacity -- to become lobster and prime rib, I expected the hybrid storage market to shift towards systems with ever-increasing amounts of flash.
Most hybrid storage systems today, from hyperconverged solutions like VMware Ready Nodes and the late, not really so lamented, EVO:RAIL to dual-controller hybrids from Tintri, have roughly 10% of their total capacity as a flash performance layer with the remaining 90% provided by spinning disks. If 10% flash was good when flash cost 20 to 50 times as much as spinning disk, wouldn’t more flash be better -- or so I thought -- now that the price gap has closed to just 10:1?
Now I’m thinking that the customers who would have been well served by a flash-heavy hybrid with 40% or 50% of its capacity as flash will probably opt for all-flash arrays instead. Don’t get me wrong; I’m not joining the chorus singing “Ding-dong, the disk is dead,” or even arguing that the hybrid era is or should be over. I’m just saying that the high-performance segment is inevitably going all flash.
My revelation came as I was thinking about VMware's recently announced VSAN 6.2. This new version of VSAN includes several storage efficiency improvements, including compression, data deduplication, and data protection via erasure coding. When I ran some rough numbers, an all-flash VSAN came out less expensive than a hybrid if we assumed even a modest degree of data reduction.
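To show the shape of that back-of-envelope math, here's a minimal sketch. All the prices and ratios below are my illustrative assumptions, not vendor quotes: hybrid VSAN protects data by mirroring (2x raw capacity) with flash serving only as cache, while all-flash VSAN 6.2 can use RAID-5 erasure coding (about 1.33x raw) plus deduplication and compression on its capacity tier.

```python
# Rough per-usable-GB cost model: hybrid vs. all-flash VSAN 6.2.
# Assumed street prices in $/GB (illustrative, not quotes):
HE_SSD = 1.55   # high-endurance cache SSD (Intel DC S3710 class)
LE_SSD = 0.55   # low-endurance capacity SSD (e.g. 4 TB Samsung PM863)
HDD = 0.055     # 4 TB 7200 RPM drive
CACHE = 0.10    # cache tier sized at ~10% of raw capacity (assumed)

def hybrid_per_usable_gb():
    # Mirrored disk capacity (2x raw), each GB of raw capacity
    # backed by a slice of high-endurance flash cache.
    return 2.0 * (HDD + CACHE * HE_SSD)

def all_flash_per_usable_gb(reduction):
    # Erasure-coded low-endurance flash, shrunk by dedupe plus
    # compression, with a high-endurance write buffer on top.
    return (1.33 / reduction) * (LE_SSD + CACHE * HE_SSD)

for r in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{r}x reduction: all-flash ${all_flash_per_usable_gb(r):.2f}/GB"
          f" vs hybrid ${hybrid_per_usable_gb():.2f}/GB")
```

With these assumed prices, the all-flash configuration crosses below the hybrid somewhere between 2x and 2.5x data reduction -- well within what dedupe plus compression typically delivers on virtual machine data.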
I’ve taken some of my fellow analysts to task for overstating the impact of data reduction on the relative cost of flash vs. disk by assuming that these technologies can only be used with flash. It’s true that reading data from a deduplicated storage pool generates a lot of I/O operations as the data is “rehydrated” from its constituent data blocks, which are spread across the data store, not stored in the order they were written. As anyone who’s ever restored data from a deduplicating backup appliance can attest, this can be a real performance problem on the low-IOPS 7200 RPM disks that are actually enough cheaper than solid-state drives to matter.
On the other hand, the folks at Storwize proved that compression actually speeds up access to data on spinning disks if implemented well, which after all was the trick that got IBM to buy Storwize in the first place. Ideally, a hybrid storage system would compress and dedupe data in its flash layer and just compress it on disk to minimize the downside of a flash miss.
That ideal hybrid would get about half the data reduction of an all-flash array that applied the full suite of reduction technologies to all its storage. We don’t, however, live in Candide’s best of all possible worlds, and many storage vendors, including VMware, only make compression available on their all-flash systems.
Since the write traffic in an all-flash system can be leveled across all of the system’s capacity, all-flash systems can use less-expensive SSDs with a lower level of overprovisioning. While these SSDs -- like Samsung’s 4 TB PM863, which Newegg is currently selling for roughly $2,200 -- still cost about 10 times as much as a 4 TB 7200 RPM drive, the media cost of a hybrid is dominated by its high-performance SSDs.
For a hypothetical 10% flash system with a 1.2 TB high-performance SSD like an Intel DC S3710 and three 4 TB disks, the SSD accounts for roughly two-thirds of the total media cost. By the time we hit 30% flash, a few expensive SSDs plus cheap disks cost about the same as providing all of the capacity as low-endurance flash. By the time we hit 40% flash, the same drive budget would buy a 10% high-endurance flash/90% low-endurance flash configuration instead.
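The crossover arithmetic can be sketched in a few lines. The per-gigabyte prices below are my assumptions, chosen to roughly match the street prices quoted above (they will drift with every price cut):

```python
# Blended media cost for a hybrid at a given flash fraction.
# Assumed $/GB (illustrative, derived from rough 2016 street prices):
HE_SSD = 1.55   # high-endurance SSD (Intel DC S3710 class)
LE_SSD = 0.55   # low-endurance SSD (e.g. 4 TB Samsung PM863 at ~$2,200)
HDD = 0.055     # 4 TB 7200 RPM drive at ~$220

def hybrid_cost_per_gb(flash_fraction, flash_price=HE_SSD, disk_price=HDD):
    """Weighted media cost when flash_fraction of capacity is flash."""
    return flash_fraction * flash_price + (1 - flash_fraction) * disk_price

for f in (0.10, 0.30, 0.40, 0.50):
    print(f"{f:.0%} high-endurance flash hybrid: "
          f"${hybrid_cost_per_gb(f):.3f}/GB")

print(f"all low-endurance flash:  ${LE_SSD:.3f}/GB")
# A 10% high-endurance / 90% low-endurance all-flash mix:
print(f"10/90 all-flash mix:      "
      f"${hybrid_cost_per_gb(0.10, disk_price=LE_SSD):.3f}/GB")
```

With these assumptions, the hybrid's blended cost passes the all-low-endurance-flash price around the 30% flash mark and matches the 10/90 all-flash mix near 40%, which is the crossover the paragraph above describes.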
Add in factors like the 4 TB SSDs coming in a 2.5” package while the 4 TB hard drives are 3.5”, and that customers will rightfully pay a premium for an all-flash system’s more predictable performance, and I have to say I was wrong. There’s no real place for a hybrid with 40% or 50% flash.
Meanwhile, hybrids with just a bit of flash will still be the right solution for some organizations, such as many of the midsize companies that were my clients in my consulting days. Much of their data is relatively cold -- everyone’s file server, for example -- and there’s significant value in maintaining one storage system. They can use the storage system’s QoS to make sure their applications perform properly, without needing a clever DBA to put 10 years of sales orders in an archive tablespace on disk and keep the ERP system’s active data in flash. In larger enterprises, a relatively small flash layer can store metadata and accelerate access to related data in an object store or bulk storage system.