Price does not tell you enough anymore
Storage is already complex and is only getting more so. New software features can work magic: each technology and feature added can dramatically change the behavior of a storage system. As with anything in life, though, there is a trade-off. It is getting harder to estimate how a storage system will actually behave.
The typical way of buying storage used to be straightforward: "Give me a solution that can deliver 20 TB of raw capacity and 20,000 IOPS of performance." Not anymore. Software features like thin provisioning, caching, tiering, snapshots, and clones can considerably change the output parameters of a storage system. Furthermore, the impact of each feature depends on the customer's use case, including the type of data and the pattern of the workload. It is getting increasingly difficult to predict the effect of a feature on the results a storage system will actually deliver. Now you need to run a proof of concept or a pre-deployment test just to get a good estimate of what your solution will really provide.
In addition, big vendors like to push a particular buzzword or feature as the solution to every problem. Perhaps the most overused term we come across is "deduplication." While it is a great feature, it has been portrayed as *the* way to reduce the physical storage footprint of your data. However, a number of other features achieve the same goal, some of which have a much bigger impact -- for example, thin provisioning, snapshots, and compression.
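To see why no single feature is "the" answer, consider that data-reduction features stack multiplicatively, and each ratio depends heavily on the workload. The sketch below uses purely illustrative ratios (they are assumptions, not vendor figures) to show how the same raw array can yield very different effective capacities:

```python
# Illustrative sketch: data-reduction features compound multiplicatively.
# All ratios below are hypothetical assumptions, not vendor guarantees --
# real ratios depend entirely on your data and workload.

def effective_capacity_tb(raw_tb, dedup_ratio, compression_ratio):
    """Estimate logical capacity after stacking reduction features."""
    return raw_tb * dedup_ratio * compression_ratio

# Same 20 TB raw array, two hypothetical workloads:
# VDI: many near-identical images -> dedup shines, modest compression.
vdi = effective_capacity_tb(20, dedup_ratio=4.0, compression_ratio=1.5)

# Database: mostly unique rows -> dedup does little, compression helps more.
db = effective_capacity_tb(20, dedup_ratio=1.1, compression_ratio=2.0)

print(f"VDI workload:      ~{vdi:.0f} TB effective")   # ~120 TB
print(f"Database workload: ~{db:.0f} TB effective")    # ~44 TB
```

The point of the sketch is not the specific numbers but the shape of the problem: the feature that dominates for one workload (deduplication for VDI) is nearly irrelevant for another, which is exactly why a proof of concept beats a spec sheet.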
Educated customers should make decisions based on the actual benefit they require, not insist on a particular technology to deliver it. As stated above, the data and the use case have a significant impact on which technology delivers the most benefit. Customers do not need features; they need solutions to real business problems.
(Image: visual7/iStockphoto)