IBM is promulgating a storage management concept that it calls a “storage hypervisor,” though the final product name has not been determined. The company claims the technology will deliver benefits such as better storage utilization, which improves storage cost economics, and data mobility, which increases flexibility by enabling, for example, non-disruptive storage refreshes, and it draws a parallel to the acceptance of server hypervisors and virtualization. But there are also broad implications for how storage will be deployed and managed under IBM’s hypervisor solutions and strategy that you may find worthy of attention. Let’s see why.
Last week’s IBM Pulse 2012 conference in Las Vegas carried the imposing subtitle “Optimizing the World’s Infrastructure,” and the company attacked a broad range of both physical and digital infrastructure issues under the now-familiar (at least to the IT world) integrating concepts of Smarter Planet and Smarter Computing. But Pulse attendees wanted not only an overview of the big transformational trends in IT, but also the ability to drill down into their areas of particular expertise so that they could return home with a game plan or set of action items based on what they had learned. Rather than attempting breadth of coverage of Pulse 2012, I will concentrate on one of my areas of focus, storage management, to illustrate that kind of specialization.
Storage management was one such area of particular attention for specialists, with emphasis on the storage hypervisor concept that IBM is working diligently to promote. Now, all IT roads lead to storage: without data, processors and networks can do no useful work, and all data not in transit has to reside on some form of storage medium. Deploying and managing storage more efficiently and effectively is therefore critical, not only to today’s storage infrastructure operations, but also as a cornerstone of the move to a cloud that offers true IT-as-a-service.
Enter the storage hypervisor in general, and IBM’s Storage Hypervisor in particular. The term “storage hypervisor” is not yet generally accepted in the IT industry; only two smaller companies, DataCore and Virsto, in addition to IBM, seem to be advocates of the term. Moreover, other terms, such as “virtual storage,” may be used instead for different approaches that yield the same essential capabilities. Still, once you understand what it does, the term provides a good mental recall mechanism for understanding what is happening, and what should happen, to the underpinnings of a hypervisor-enabled storage infrastructure.
For simplicity’s sake, I’ll focus on what IBM is offering. Note that while “storage hypervisor” may be a concept, IBM implements the concept through real products. The company views the storage hypervisor as a combination of application software that performs the necessary storage virtualization functions and management software that provides a centralized, automated framework for all virtualized storage resources. The “actor” software underlying the whole thing is IBM’s System Storage SAN Volume Controller (SVC), and the “director” software is the IBM Tivoli Storage Productivity Center (TPC). To this, IBM also adds IBM Tivoli Storage FlashCopy Manager, as it considers the special snapshot capability incorporated as part of the storage hypervisor to be an essential ingredient.
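To make the “actor”/“director” split more concrete, here is a minimal sketch in Python of how such a layered design could be modeled. The class and method names (VirtualizationEngine, ManagementPlane, snapshot, and so on) are my own illustrative assumptions, not IBM’s actual SVC, TPC, or FlashCopy Manager interfaces.

```python
# Illustrative sketch of the "actor"/"director" split described above.
# Names are hypothetical; they do not reflect IBM's SVC, TPC, or
# FlashCopy Manager APIs.

class VirtualizationEngine:
    """The 'actor': sits in the data path and presents virtual volumes
    drawn from pooled back-end capacity (the role SVC plays)."""

    def __init__(self):
        self.pool_gb = 0
        self.volumes = {}      # volume name -> size in GB
        self.snapshots = {}    # volume name -> list of point-in-time copies

    def add_backend_array(self, capacity_gb):
        # Capacity from any back-end array joins a single virtual pool.
        self.pool_gb += capacity_gb

    def create_volume(self, name, size_gb):
        if size_gb > self.pool_gb:
            raise RuntimeError("pool exhausted")
        self.pool_gb -= size_gb
        self.volumes[name] = size_gb

    def snapshot(self, name):
        # Point-in-time copy, analogous in spirit to a FlashCopy snapshot.
        self.snapshots.setdefault(name, []).append(self.volumes[name])


class ManagementPlane:
    """The 'director': centralized, automated policy and provisioning
    across all virtualized resources (the role TPC plays)."""

    def __init__(self, engine):
        self.engine = engine

    def provision(self, app, size_gb, protect=False):
        self.engine.create_volume(app, size_gb)
        if protect:
            self.engine.snapshot(app)


engine = VirtualizationEngine()
engine.add_backend_array(10_000)   # capacity from one array
engine.add_backend_array(5_000)    # capacity from another vendor's array
mgr = ManagementPlane(engine)
mgr.provision("erp-db", 2_000, protect=True)
```

The point of the split is that the data-path layer can keep serving I/O across heterogeneous back ends while the management layer applies policy and automation from one place.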
Now, the first question that many might ask is: isn’t the storage hypervisor simply a re-bundling of existing IBM products? While the answer at first blush would be yes, a closer examination reveals synergies in this new combination that might not have emerged if the products were used individually. Moreover, putting the combination under the rubric of a storage hypervisor aids understanding of what it does, its benefits, and the larger implications.
Obviously, the use of a storage hypervisor invokes the concept of the server hypervisor in the minds of CIOs and other IT professionals. The server hypervisor, and server virtualization generally, was once relegated to enterprise-class mainframe computing environments but is now considered a “good thing” (albeit with some caveats, perhaps) in server systems of nearly every type. Although storage virtualization has been around for a long time, it has not received the same level of attention or achieved the same success as virtualization on the server side. That has to change, as the falling cost of storage alone can no longer keep pace with explosive data growth under tight budgets. Thus, IBM’s storage hypervisor may provide a mental rallying point around which the next stage in storage infrastructure evolution can take place.
A storage hypervisor creates a single pool of managed storage that can span multiple storage arrays or even JBOD (Just a Bunch of Disks) enclosures. Virtualized storage, even within a single array, divides up SAN storage differently than the traditional method. Traditionally, shared storage in a SAN means that each application is allocated a portion of the available physical storage based on a guess of what it will need over time.
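As a rough illustration of why that difference matters for utilization, the following Python sketch contrasts the traditional per-application carve-up with a single pool spanning several arrays. The array names, capacities, and usage figures are hypothetical, and the pooled arithmetic is a simplification rather than a description of IBM’s specific implementation.

```python
# Sketch contrasting traditional per-application carve-up with a single
# pooled, virtualized view spanning multiple arrays. Names and numbers
# are hypothetical.

ARRAY_CAPACITY_GB = {"array_a": 20_000, "array_b": 15_000, "jbod_1": 5_000}

# Traditional: each application gets a fixed slice up front, sized by guesswork.
traditional_allocation = {
    "email":     {"array": "array_a", "allocated_gb": 8_000,  "used_gb": 2_500},
    "erp":       {"array": "array_b", "allocated_gb": 10_000, "used_gb": 4_000},
    "analytics": {"array": "array_a", "allocated_gb": 6_000,  "used_gb": 1_000},
}

def stranded_capacity(allocations):
    """Capacity reserved for one application but unusable by any other."""
    return sum(a["allocated_gb"] - a["used_gb"] for a in allocations.values())

# Hypervisor-style: every array contributes to one pool, and capacity is
# drawn only as applications actually consume it.
pool_gb = sum(ARRAY_CAPACITY_GB.values())
consumed_gb = sum(a["used_gb"] for a in traditional_allocation.values())

print("Stranded in traditional carve-up:", stranded_capacity(traditional_allocation), "GB")
print("Free in single virtualized pool:  ", pool_gb - consumed_gb, "GB")
```

In the carve-up model the unused slack inside each application’s allocation is stranded; in the pooled model that same slack remains available to any application that needs it, which is where the utilization gains come from.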