In the last few years, data centers have undergone tremendous technological change. One of the areas of greatest impact on IT has been the increased adoption of virtualization. In fact, according to a recent survey by IDC, 66% of workloads are now virtualized. But the storage on which those workloads reside was designed for the physical world, introducing a major disconnect in the data center.
Because of this disconnect, many data center professionals are feeling pain. Earlier this year Tintri surveyed 1,000 data center professionals, who named their top pain points as performance (51%), capital expenses (41%) and manageability (39%). This pain makes sense -- the growing number of virtual workloads generates far more random I/O than disk-centric storage was designed to handle.
In an effort to improve performance, storage admins find themselves spending time shuffling virtual machines from one storage LUN or volume to another. If an application performs poorly, there’s no way to pinpoint the cause since the storage performance data is at the LUN level. This lack of VM-level visibility makes it difficult, if not impossible, to identify the issue and deal with it.
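As a simple illustration of that visibility gap, consider the following sketch (with made-up latency figures, not data from any particular array): the LUN-level average looks healthy even though a single VM on that LUN is the real problem.

```python
# Minimal sketch with hypothetical latency figures: a LUN-level average hides the one VM
# that is actually suffering.
lun_vm_latency_ms = {
    "lun01": {"web01": 2.1, "web02": 2.4, "sql01": 38.0, "vdi07": 2.9},
}

for lun, vms in lun_vm_latency_ms.items():
    lun_avg = sum(vms.values()) / len(vms)
    print(f"{lun}: average latency {lun_avg:.1f} ms")          # ~11 ms, looks tolerable
    for vm, latency in sorted(vms.items(), key=lambda kv: -kv[1]):
        print(f"  {vm}: {latency:.1f} ms")                      # sql01 at 38 ms is the real issue
```

With only the first line of that output available, an admin has no way to tell which workload to move or throttle.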
To solve performance problems, flash seems like an easy solution. It offers low latency and handles random I/O well. In fact, a single commodity solid-state drive is 400 times faster than a hard disk drive. As performance and latency issues grow, IT has responded by overprovisioning large numbers of short-stroked drives, adding flash caching, and finally adding all-flash arrays. But adding large amounts of conventional flash alone buys time, not a solution. There are three reasons why flash alone is insufficient to resolve the three most significant storage pain points.
Performance. There’s an assumption that all-flash will immediately eliminate latency. Unfortunately, it’s not a certainty. The irony of most conventional all-flash systems is that while the hardware is state-of-the-art solid-state storage, they are built on the exact same logical architecture as the outdated disk storage systems they replace. That means they still handle I/O requests in sequential queues based on LUNs and volumes, where mission-critical applications can get stuck behind huge databases, pools of persistent desktops or other heavy workloads -- and so latency persists. Each VM needs its own lane where it can perform unencumbered.
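To make the queuing argument concrete, here is a minimal sketch -- using hypothetical request costs, not any vendor's actual scheduler -- comparing a single shared LUN queue with per-VM queues served round-robin. In the shared queue, the latency-sensitive application waits behind every database request that arrived ahead of it; with a lane per VM it finishes far sooner.

```python
# Minimal sketch: head-of-line blocking in one shared LUN queue versus
# round-robin service across per-VM queues ("a lane per VM").
from collections import deque

# Each request is (vm_name, service_time_ms); the database floods the queue first.
requests = [("big-db", 5)] * 20 + [("critical-app", 1)] * 5

def shared_queue_finish(reqs):
    """All VMs share one FIFO queue; latecomers wait behind everything ahead of them."""
    clock, finish = 0, {}
    for vm, cost in reqs:
        clock += cost
        finish[vm] = clock            # completion time of that VM's last request
    return finish

def per_vm_queues_finish(reqs):
    """Each VM gets its own queue; the scheduler services the queues round-robin."""
    queues = {}
    for vm, cost in reqs:
        queues.setdefault(vm, deque()).append(cost)
    clock, finish = 0, {}
    while any(queues.values()):
        for vm, q in queues.items():
            if q:
                clock += q.popleft()
                finish[vm] = clock
    return finish

print("shared queue :", shared_queue_finish(requests))    # critical-app finishes at 105 ms
print("per-VM queues:", per_vm_queues_finish(requests))   # critical-app finishes at 30 ms
```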
Capital expense. Plenty has been written about rapid reductions in the price of flash, but the fact remains: It’s flat-out more expensive than disk. The problem with buying conventional flash is that you’re inevitably spending part of that expensive footprint on workloads that would be adequately served by disk. What you really need is the ability to optimize the placement of workloads across a mix of all-flash and hybrid-flash to maximize value.
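As a rough illustration of that placement idea, here is a minimal sketch (hypothetical workloads and capacities) of a greedy pass that gives the hottest workloads, measured by IOPS per GB, first claim on all-flash capacity and lets colder data land on hybrid storage.

```python
# Minimal sketch: hottest workloads (highest IOPS density) claim all-flash capacity first;
# everything else goes to hybrid. Workload figures are made up for illustration.
workloads = [
    {"vm": "oltp-db",  "gb": 400,  "iops_per_gb": 2.5},
    {"vm": "vdi-pool", "gb": 800,  "iops_per_gb": 1.2},
    {"vm": "file-srv", "gb": 1200, "iops_per_gb": 0.1},
    {"vm": "backup",   "gb": 2000, "iops_per_gb": 0.02},
]
all_flash_capacity_gb = 1500

placement = {"all-flash": [], "hybrid": []}
remaining = all_flash_capacity_gb
for w in sorted(workloads, key=lambda w: w["iops_per_gb"], reverse=True):
    if w["gb"] <= remaining:
        placement["all-flash"].append(w["vm"])
        remaining -= w["gb"]
    else:
        placement["hybrid"].append(w["vm"])

print(placement)   # {'all-flash': ['oltp-db', 'vdi-pool'], 'hybrid': ['file-srv', 'backup']}
```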
Manageability. Conventional flash storage is still built on a foundation of LUNs and volumes. Managing those devices requires deep storage expertise and comfort with RAID types, data striping, queue depths and more. If you want to assign accountability for storage to the teams that use it, such as the test and development team or the VDI group, flash storage will prove difficult to manage. In an era where two in three workloads have been virtualized, it’s counterintuitive to use the same management constructs that were designed for physical workloads.
The bottom line is that flash alone treats symptoms -- the surface-level storage pains. Organizations instead need solutions that address the root cause: the disconnect between recently virtualized workloads and outdated storage architectures.
To close the gap, storage that operates at the level of individual virtual machines is key. With that degree of granularity and control, companies can solve the performance and manageability pain that plagues conventional storage.