ILM also requires a policy engine that lets users map classes of self-describing data, together with information from the access-frequency counter function, to classes of infrastructure, creating policies that automate the migration of data through the storage infrastructure. This component, together with the enabling components described above, is critical if ILM is to achieve its capacity-utilization efficiency objective.
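To make the idea concrete, here is a minimal sketch of such a policy rule in Python. The class names, tier labels, and rule structure are illustrative assumptions, not any vendor's actual policy-engine API; the point is only that a policy joins the data class (from a self-describing header) with access-frequency data to choose a target tier.

```python
# Hypothetical ILM policy sketch: class names, tiers, and thresholds are
# illustrative assumptions, not a vendor's actual policy-engine API.
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    data_class: str          # read from the object's self-describing header
    days_since_access: int   # supplied by the access-frequency counter

@dataclass
class PolicyRule:
    data_class: str
    min_idle_days: int
    target_tier: str         # e.g. "tier1-fc", "tier2-sata", "tier3-tape"

def plan_migrations(objects, rules):
    """Map each object to a storage tier using the first matching rule.

    Rules are evaluated in order, so longer idle thresholds are listed first.
    """
    plan = []
    for obj in objects:
        for rule in rules:
            if (obj.data_class == rule.data_class
                    and obj.days_since_access >= rule.min_idle_days):
                plan.append((obj.name, rule.target_tier))
                break
    return plan

if __name__ == "__main__":
    rules = [
        PolicyRule("financial-record", min_idle_days=365, target_tier="tier3-tape"),
        PolicyRule("financial-record", min_idle_days=90,  target_tier="tier2-sata"),
        PolicyRule("scratch",          min_idle_days=30,  target_tier="tier3-tape"),
    ]
    objects = [
        DataObject("q3-ledger.xls",  "financial-record", days_since_access=120),
        DataObject("render-temp.dat", "scratch",          days_since_access=45),
    ]
    print(plan_migrations(objects, rules))
    # [('q3-ledger.xls', 'tier2-sata'), ('render-temp.dat', 'tier3-tape')]
```

The design choice worth noting is that the policy never inspects file names or modification times; it trusts the self-describing header for the data class, which is exactly the piece missing from today's heterogeneous environments.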
No ILM vendor yet offers the full suite of functionality described above for a heterogeneous storage environment. The reason, aside from the desire of most vendors to lock consumers into proprietary hardware and software, is that data-naming schemes have not been a development priority within the industry or the standards groups.
But Microsoft's stated objective of replacing its file system with an object-oriented SQL database in its next-generation Windows server OS, code-named "Longhorn," may present new opportunities for data naming. If all files become objects in a database, an opportunity is created for data description. Adding a self-describing header might be as simple as adding a row of descriptive attributes alongside the objects in the database.
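As a toy illustration only, and emphatically not Microsoft's design, the sketch below shows what attaching a self-describing record to each stored object might look like in a relational store. The table and column names are invented for the example.

```python
# Toy illustration (not Microsoft's Longhorn design): a self-describing
# metadata record stored alongside each file object in a database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE file_objects (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    payload BLOB
);
CREATE TABLE data_headers (               -- the "self-describing" row
    object_id    INTEGER REFERENCES file_objects(id),
    data_class   TEXT,                    -- business process that produced it
    owner        TEXT,
    retention    TEXT,                    -- retention or compliance requirement
    created_utc  TEXT
);
""")

conn.execute(
    "INSERT INTO file_objects (id, name, payload) VALUES (1, 'q3-ledger.xls', x'00')"
)
conn.execute(
    "INSERT INTO data_headers VALUES (1, 'financial-record', 'finance-dept', '7-years', '2004-01-15')"
)

# A policy engine could now select objects by class instead of guessing
# from file names or modification times.
for row in conn.execute("""
    SELECT f.name, h.data_class, h.retention
    FROM file_objects f JOIN data_headers h ON h.object_id = f.id
    WHERE h.data_class = 'financial-record'
"""):
    print(row)
```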
If Longhorn is a hit and administrators of the 80-plus percent of open-system servers deployed today go along with Microsoft's file-system replacement, a huge opportunity may present itself for implementing a data-naming scheme at the source of data creation. Other vendors, such as Oracle and IBM, would likely follow suit with OODB-based file systems for Unix and Linux platforms, creating additional opportunities to implement self-describing data. (Both vendors have suggested a database-as-file-system strategy from time to time in white papers since the mid-1990s.)
Until that happens, current ILM "solutions" using proprietary approaches may deliver incremental improvements in storage-cost containment. However, as with all "stovepipe" approaches, short-term gains may yield longer-term losses by locking consumers into proprietary technologies. Backing data out of a proprietary ILM scheme that has become less than efficient could cost more than loading data into that scheme in the first place.
Jon William Toigo is CEO of storage consultancy Toigo Partners International, founder and chairman of the Data Management Institute, and author of 13 books, including Disaster Recovery Planning: Preparing for the Unthinkable (Pearson Education, 2002) and The Holy Grail of Network Storage Management (Prentice Hall PTR, 2003). Write to him at [email protected].