Network Computing is part of the Informa Tech Division of Informa PLC


Storage Challenges In The Virtualized Data Center

When your data center faces complicated operational problems, slow performance, or other frustrations, that’s time and money going down the drain. The finger-pointing is often aimed at the storage team. In reality, it’s not the storage team’s fault: storage was originally designed for physical workloads, but today it must work well as an element firmly entrenched in a highly virtualized data center.

The problem is that storage designed for physical workloads, with logical unit numbers (LUNs) and volumes that might house tens or hundreds of individual virtual machines (VMs), causes resident VMs to fight for a limited pool of resources. It’s a phenomenon called the “noisy neighbor” problem.

One common solution is to throw more flash at the problem, but even an all-flash storage architecture dedicated to LUNs and volumes does not necessarily overcome the following pain points of managing virtual workloads:

Scale: If virtual data demands are growing, a knee-jerk reaction might be to over-provision storage, buying more as a buffer to maintain sufficient performance. Alternatively, an effort to simplify operations with a hyperconverged environment might require scaling compute and storage in unison, even when only one of the two resources is needed. A virtual environment such as a private cloud with 100,000 virtual machines requires the storage infrastructure to manage dynamic groups of VMs and apply storage policies as more VMs are added.
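
The "dynamic groups" idea can be sketched in a few lines: a tag-matching group applies one storage policy to every VM it admits, so adding the 100,000th VM is no more per-LUN work than adding the first. The class and field names below (StoragePolicy, VMGroup, min_iops) are illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    name: str
    min_iops: int
    max_iops: int

@dataclass
class VMGroup:
    """A dynamic group: any VM whose tags match is covered automatically."""
    match_tag: str
    policy: StoragePolicy
    members: list = field(default_factory=list)

    def admit(self, vm_name: str, tags: set) -> bool:
        """New VMs inherit the group's policy; no manual per-LUN tuning."""
        if self.match_tag in tags:
            self.members.append(vm_name)
            return True
        return False

gold = VMGroup("tier:gold", StoragePolicy("gold", min_iops=1000, max_iops=10000))
gold.admit("web-01", {"tier:gold", "app:web"})
gold.admit("batch-07", {"tier:bronze"})   # wrong tag, not admitted
print(gold.members)  # ['web-01']
```

The point of the sketch is that policy lives on the group, not on each VM, so growth changes membership but not administration.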

Performance: Each vDisk and VM has a specific I/O performance requirement, independent of capacity. Virtual workloads might require monitoring not only the storage, but also the server host or network. The underlying storage should give each VM its own I/O “lane” so that the virtualization or private cloud environment gets application-optimized performance, whether the use cases are mixed server, desktop, or cloud workloads.
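
One way to picture a per-VM I/O "lane" is a minimum-guarantee-plus-cap allocator: every VM first receives its guaranteed IOPS, then spare capacity is dealt out in fixed slices until each VM reaches its maximum, so a noisy neighbor cannot crowd out the minimums. This is only a sketch of the idea, assuming a simple {vm: (min_iops, max_iops)} shape rather than any real array's QoS interface.

```python
def allocate_iops(total_iops, vms):
    """Allocate IOPS lanes. vms maps name -> (min_iops, max_iops)."""
    # Step 1: every VM gets its guaranteed minimum.
    alloc = {name: lo for name, (lo, hi) in vms.items()}
    spare = total_iops - sum(alloc.values())
    step = 100  # hand out the remainder in fixed slices
    while spare >= step:
        # VMs that still have headroom below their cap.
        open_vms = [n for n, (lo, hi) in vms.items() if alloc[n] + step <= hi]
        if not open_vms:
            break
        for n in open_vms:
            if spare < step:
                break
            alloc[n] += step
            spare -= step
    return alloc

lanes = allocate_iops(10000, {"db": (2000, 8000),
                              "web": (1000, 3000),
                              "batch": (500, 2000)})
print(lanes)  # {'db': 5000, 'web': 3000, 'batch': 2000}
```

In the example, "web" and "batch" stop at their caps and "db" absorbs the rest; every VM keeps at least its minimum regardless of what its neighbors demand.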

Manageability: Storage admins commonly maintain a complicated spreadsheet to map every VM to its respective LUN(s) or volume(s) within a conventional storage infrastructure. As VMs are shuffled between LUNs to preserve performance, the admin is caught in a downward spiral to spreadsheet hell. Purchasing more conventional storage also adds more VMs and more management burden. Automating quality of service (QoS) at the VM level would remove the need for manual tuning. This type of storage policy management means virtual desktops can be mixed with server applications and cloud workloads on the same storage infrastructure.
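
The spreadsheet itself hints at the fix: the mapping it maintains is just a small registry that software can keep for the admin, with the QoS policy recorded at VM granularity in the same step as placement. The attach_vm helper and policy names below are hypothetical, purely to illustrate the bookkeeping.

```python
# vm_name -> {"lun": ..., "qos": ...}; the registry that replaces the spreadsheet
placements = {}

def attach_vm(vm_name, lun, qos_policy):
    """Record placement and per-VM QoS in one step, instead of a spreadsheet row."""
    placements[vm_name] = {"lun": lun, "qos": qos_policy}

def vms_on_lun(lun):
    """The lookup the spreadsheet used to answer: which VMs share this LUN?"""
    return sorted(v for v, p in placements.items() if p["lun"] == lun)

# Desktops and a database server can share a LUN, each with its own policy.
attach_vm("desktop-12", "lun-03", "burst")
attach_vm("sql-01", "lun-03", "guaranteed-5k-iops")
print(vms_on_lun("lun-03"))  # ['desktop-12', 'sql-01']
```

Because QoS is attached to the VM rather than the LUN, shuffling a VM to another LUN is a one-line update instead of a re-tuning exercise.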

With these challenges, many simply feel stuck and resign themselves to the idea that managing LUNs and volumes all day is just the “new normal.” In reality, storage admins have choices. Instead of postponing the problem with more purchases, admins and IT leaders can start by identifying ways to overcome the limitations of physical storage. They can wrest back control of scale, performance, and manageability while balancing physical and virtual application needs. And then, when performance is consistently speedy and folks want to know who’s responsible, people are welcome to point fingers at the storage team.