Software-defined storage has generated both unbridled excitement and massive confusion. Vendors push their own software-defined storage messages: some emphasize storage virtualization and the ability to manage multiple types of hardware -- including commodity hardware -- as a single pool of storage, while others emphasize delivering storage controller functionality as software and letting IT pros select their own, often commodity, hardware.
A common element in these stories -- cost savings from using white-box systems -- is overemphasized, leading many IT professionals to miss the big picture. Treating commodity hardware as the primary opportunity of software-defined storage distracts from the technology's greater benefits.
So why am I saying that the industry is placing too much emphasis on white-box? After all, it seems like a great way to save money.
Here's why. When building a storage solution, whether that's a storage array, an appliance or a software offering, much, if not most, of the value is in the software. This has been true for over a decade. At the same time, storage systems have used commodity components for years, including Intel processors, server memory and industry-standard drives. Some storage appliance makers simply install their software on a commodity server before shipping it to the customer.
This is why focusing solely on the ability to use commodity hardware creates so much confusion around software-defined storage. Vendors may treat “software-defined” as simply a change in pricing model, perhaps charging by capacity or by the month. From the customer's perspective, software-defined can start to look like building your own PC: the extra configuration work and commodity components save some money, but not enough to be truly compelling.
The commodity hardware arguments obscure the value of software-defined storage, which offers far greater benefits, many of which get overlooked:
- Access to new technology sooner: Server vendors tend to update their hardware on a much faster cadence than storage vendors, sometimes as quickly as every six months versus every three years for storage. Software-defined architectures can give storage products access to faster processors and memory sooner, increasing performance.
- Faster deployments: Separating the hardware and software purchasing decisions allows the storage software purchase to be delayed until an organization actually requires the capacity. When it is needed, the software can be installed on hardware that is already in place, cutting new deployments from weeks to hours or even minutes.
- Elimination of hardware migrations: Because the software is abstracted from the hardware, some storage systems can eliminate costly data migrations when moving to new hardware; new hardware is simply incorporated into the existing system as part of a single pool (a rough sketch of this idea follows the list).
- Reduced software management costs: Traditional storage products tie storage software licenses to hardware versions, so a hardware upgrade can require purchasing and managing an entirely new set of software licenses; some software-defined storage products eliminate this requirement.
- Cost savings via convergence: Running storage on dedicated commodity hardware saves less than needing no dedicated hardware at all. Virtual storage appliances and hyper-converged systems let storage software run in the excess processing cycles of virtualized server environments, further reducing infrastructure costs.
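To make the pooling idea in the hardware-migration bullet concrete, here is a minimal, hypothetical Python sketch -- not any vendor's actual implementation -- of a software-defined pool that absorbs new commodity nodes and rebalances existing data in place. The Node and StoragePool classes, the 1 GB chunk size and the 10% rebalance threshold are all illustrative assumptions.

```python
# Hypothetical illustration: a software-defined pool spanning heterogeneous
# commodity nodes. Adding newer, larger hardware does not force a migration
# to a new array; the pool simply rebalances some chunks onto the new node.

from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    capacity_gb: int
    chunks: list = field(default_factory=list)  # chunk IDs placed on this node

    @property
    def used_gb(self) -> int:
        return len(self.chunks)  # assume 1 GB chunks for simplicity


class StoragePool:
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        """Incorporate new hardware directly into the pool, then rebalance."""
        self.nodes.append(node)
        self._rebalance()

    def write_chunk(self, chunk_id: str) -> Node:
        """Place a chunk on the node with the most free capacity."""
        target = max(self.nodes, key=lambda n: n.capacity_gb - n.used_gb)
        target.chunks.append(chunk_id)
        return target

    def _rebalance(self) -> None:
        """Move chunks from the fullest node to the emptiest one until
        utilization is roughly even -- no external migration required."""
        while True:
            fullest = max(self.nodes, key=lambda n: n.used_gb / n.capacity_gb)
            emptiest = min(self.nodes, key=lambda n: n.used_gb / n.capacity_gb)
            gap = (fullest.used_gb / fullest.capacity_gb
                   - emptiest.used_gb / emptiest.capacity_gb)
            if gap < 0.1 or not fullest.chunks:
                break
            emptiest.chunks.append(fullest.chunks.pop())


if __name__ == "__main__":
    pool = StoragePool()
    pool.add_node(Node("old-server-1", capacity_gb=10))
    for i in range(8):
        pool.write_chunk(f"chunk-{i}")

    # Years later: newer, faster commodity hardware joins the same pool.
    pool.add_node(Node("new-server-1", capacity_gb=40))
    for node in pool.nodes:
        print(node.name, f"{node.used_gb}/{node.capacity_gb} GB used")
```

The point is architectural rather than code-specific: because placement decisions live in software, new hardware extends the existing pool instead of triggering a wholesale migration to a new array.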
There is more to software-defined storage than the opportunity to reduce costs with white-box servers. Hopefully, I have provided a better understanding of this enigmatic technology and highlighted some of the additional capabilities to look for when evaluating storage products, software-defined or not.