VMware delivered long-awaited storage improvements in the most recent version of vSphere, most notably VSAN and the Flash Read Cache. However, several significant promises remain unfulfilled.
It’s time for VMware to upgrade its support for file storage (as opposed to block storage) and embrace the pioneering vendors who are building storage systems specifically for the virtualization environment.
File-based storage makes sense for virtualization. The hypervisor presents virtual disks to the virtual machines it hosts. It stores those virtual disks, and the rest of the information it keeps about each VM, as files. Because functions like vMotion rely on shared storage, VMware had to create a clustered file system, VMFS, to allow multiple hosts to access the same SAN volumes. Before VAAI, this led to severe limitations on how many VMs could be stored in a single datastore/volume, and it still creates some complexity for administrators.
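To make the file model concrete, here's a hypothetical listing of a single VM's directory on a VMFS datastore. The datastore and VM names are invented, but the file types are the ones vSphere actually creates:

```
/vmfs/volumes/datastore1/web01/
    web01.vmx         # VM configuration
    web01.vmdk        # virtual disk descriptor
    web01-flat.vmdk   # virtual disk data
    web01.nvram       # BIOS/firmware state
    web01.vmsd        # snapshot metadata
    vmware.log        # VM log file
```

Every host in the cluster needs coordinated access to files like these, which is exactly the problem VMFS solves on block storage.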
Because an NFS storage system manages its own file system, vSphere doesn't need VMFS on NFS datastores at all. As a result, managing vSphere with NFS storage is somewhat simpler than managing an equivalent system on block storage. Even better, a good NFS storage system, because it knows which blocks belong to which virtual machine, can perform storage management tasks such as snapshots, replication and storage quality of service per virtual machine rather than per volume.
Recognizing that we have to transition to the virtual machine as the unit of storage management, VMware has for years been talking about vVols, but there was no vVol news at VMworld this year. A vVol is essentially a micro-LUN: each virtual disk of each virtual machine is stored on the SAN array as a separate volume, so the array can provide functions like snapshots or replication on a per-VM basis.
[Storage startup Coho brings OpenFlow to storage. Get details in “Coho Applies SDN To Scale-Out Storage.”]
We can’t do this today because block I/O protocols require the initiator (host) to log into the target (array) for each volume it mounts, and there are limits to the number of logins an array can support at any one time. So we build datastores that pack multiple VMs into a single volume because the array, or more accurately the protocol used to access the array, can’t support more than, say, 1,024 connections.
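A quick back-of-the-envelope sketch illustrates the constraint. The VM counts, disk counts and packing density below are assumptions for illustration, not figures from any particular array:

```python
# Why per-VM volumes (micro-LUNs) collide with block-protocol login limits.
# All numbers here are illustrative assumptions.

vm_count = 500                # assumed VMs in the cluster
vdisks_per_vm = 3             # assumed virtual disks per VM
login_ceiling = 1024          # the connection limit cited above

# vVol-style: every virtual disk is its own volume, so its own login.
micro_lun_logins = vm_count * vdisks_per_vm
print(micro_lun_logins)       # 1500 -- already past the 1024 ceiling

# Datastore-style: pack many VMs into each large volume instead.
vms_per_datastore = 25        # assumed packing density
datastore_logins = -(-vm_count // vms_per_datastore)  # ceiling division
print(datastore_logins)       # 20 -- comfortably under the limit
```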
Also, vVols require storage vendors to make significant changes to their systems to support micro-LUNs and the demultiplexer function. My best guess is that vVols won’t hit the market in a form users can put into production for another two years or more.
Better NFS support would empower storage vendors to innovate, strengthen the vSphere ecosystem, and fill the gap until vVols are ready. It would also give users an alternative once vVols hit the market.
The first step would be for VMware to acknowledge that NFS has advanced in the past decade. Today vSphere supports version 3.0 of NFS, a protocol that's seventeen years old. NFS 4.1 offers much more sophisticated security and locking than NFS 3.0, along with networking improvements. The optional pNFS extension can bring the performance and multipathing of SANs while keeping centralized file system management.
I think VMware should also extend the NFS version of VAAI to support the per-VM snapshots now starting to appear on storage systems from vendors including Tintri, SimpliVity, Sanbolic, Nutanix and even VMware’s own Virsto. With VAAI integration, the storage system's snapshots could completely replace VMware’s log-based snapshots for vStorage APIs for Data Protection (VADP) backups.
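As a sketch of what that integration could look like, the flow below offloads the snapshot to the array. Every object and method name here is hypothetical, not a real VMware or vendor API:

```python
# Hypothetical per-VM, array-offloaded backup flow (all names invented).
# The point: the array's native snapshot replaces vSphere's redo-log snapshot.

def backup_vm(vm, array, backup_target):
    vm.quiesce()                         # flush guest I/O for a consistent image
    snapshot = array.snapshot_vm(vm)     # array takes a per-VM snapshot natively
    vm.resume()                          # VM runs on with no redo log to consolidate
    backup_target.copy_from(snapshot)    # backup reads come from the array, not the host
    array.delete_snapshot(snapshot)      # cleanup without a long snapshot-removal stun
```

Because no redo log ever exists in this flow, there's no consolidation penalty when the backup finishes.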
While I’ve heard rumors that VMware wants the future of vSphere storage to be either VSAN on server-attached storage or vVols on EMC storage, I hope it can take a more liberal view and upgrade vSphere’s NFS support. While I’m making requests, adding SMB 3 would make sense too, but that’s probably a bridge too far.
Disclaimer: SimpliVity is a client of DeepStorage, LLC. Tintri has provided equipment for use in DeepStorage Labs.