Recently I met with an organization whose data storage and protection needs were just starting to approach "enterprise" class. They invited me in to go over their various backup options and to provide some guidance on which path they should take. Like most datacenters working their way up to enterprise class, they are struggling to meet backup and recovery windows and finding that their old "SMB ways" fall short.
As I usually find when meeting with IT professionals, they were on the ball and took a high level of personal pride in protecting the organization's data assets. My sense of the meeting's goal, and this is a typical situation for me to find myself in, was that they were hoping I had the magic answer: the one product that would solve all of their data-protection needs. Unfortunately, that was not the case. Like most datacenters, they will probably need two or three data-protection methods before all is said and done.
The first step in any data-protection overhaul is to segment the application and/or server population into what I call data-protection zones: groups of servers running in a similar environment and, in some cases, running a similar application. In this case, they had already virtualized 80% of their server infrastructure. There was also an MS-SQL cluster that was never going to be virtualized, and an Exchange environment that was being considered for outsourcing but certainly had to be protected until that decision could be made.
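To make the zoning step concrete, here is a toy Python sketch of how a server inventory might be bucketed into protection zones. The server names, attributes, and zone labels are all hypothetical; a real inventory would come from your CMDB or hypervisor.

```python
# A toy sketch of bucketing a server inventory into data-protection
# zones by platform and application. Server names, attributes, and
# zone labels are all hypothetical.
servers = [
    {"name": "web01",  "platform": "vmware",   "app": "iis"},
    {"name": "sql01",  "platform": "physical", "app": "mssql"},
    {"name": "sql02",  "platform": "physical", "app": "mssql"},
    {"name": "mail01", "platform": "vmware",   "app": "exchange"},
]

def zone_for(server: dict) -> str:
    """Assign a server to a protection zone by its environment and app."""
    if server["app"] == "mssql" and server["platform"] == "physical":
        return "mssql-cluster"
    if server["app"] == "exchange":
        return "exchange"
    return "general-vms" if server["platform"] == "vmware" else "other-physical"

zones: dict[str, list[str]] = {}
for s in servers:
    zones.setdefault(zone_for(s), []).append(s["name"])

print(zones)
# {'general-vms': ['web01'], 'mssql-cluster': ['sql01', 'sql02'], 'exchange': ['mail01']}
```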
With this work out of the way, the next step was to establish recovery point objectives (RPOs) and recovery time objectives (RTOs) for each protection zone. In general, the virtualized environment can be covered by a single set of objectives, but users need to be on the lookout for "special" VMs, such as virtualized but mission-critical database applications. Because the environment was so heavily virtualized, and the overwhelming majority of those VMs ran Windows, I guided them toward a VM-specific backup application to protect it.
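As a rough illustration of what those objectives might look like once written down, here is a minimal sketch. The zone names and RPO/RTO targets below are hypothetical; real numbers have to come from the business owners of each application, not from IT defaults.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ProtectionZone:
    name: str
    rpo: timedelta  # recovery point objective: maximum tolerable data loss
    rto: timedelta  # recovery time objective: maximum tolerable downtime

# Hypothetical targets for illustration only.
zones = [
    ProtectionZone("general-vms",     rpo=timedelta(hours=24),   rto=timedelta(hours=8)),
    ProtectionZone("critical-db-vms", rpo=timedelta(minutes=15), rto=timedelta(hours=1)),
    ProtectionZone("mssql-cluster",   rpo=timedelta(minutes=5),  rto=timedelta(minutes=30)),
    ProtectionZone("exchange",        rpo=timedelta(hours=4),    rto=timedelta(hours=4)),
]

for z in zones:
    print(f"{z.name}: lose no more than {z.rpo} of data, recover within {z.rto}")
```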
The MS-SQL cluster was another story. It was large, at almost 1 TB of database data, and highly critical to the business. As you might expect, they were throwing everything at the problem: scripts, real-time replication, and storage system snapshots. Although the setup was neither automated nor by the book, it was working for them, but they knew it was fragile.
With time running out in our initial meeting, my immediate recommendation was to document all the data-protection processes running against the MS-SQL environment and to establish a recovery plan so that everyone on the IT team knows which data to recover for a given failure. You don't want to end up restoring the wrong copy of the data because you didn't know a fresher copy existed.
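One lightweight way to capture that plan is a register that lists, for each system, every copy of the data that exists, ordered freshest first. The sketch below is purely illustrative; the copy types, schedules, and restore steps are invented, not taken from this organization's environment.

```python
# A minimal sketch of a recovery "source of truth": for each system,
# list every existing copy of the data, freshest first, so no one
# restores a stale copy when a fresher one exists. The copy types,
# schedules, and restore steps here are invented for illustration.
recovery_plan = {
    "mssql-cluster": [
        {"copy": "real-time replica", "staleness": "seconds",
         "restore": "promote the replica"},
        {"copy": "storage snapshot", "staleness": "up to 1 hour",
         "restore": "mount snapshot, attach databases"},
        {"copy": "nightly backup", "staleness": "up to 24 hours",
         "restore": "full restore from backup"},
    ],
}

def freshest_copy(system: str) -> dict:
    """Return the first (freshest) documented copy for a system."""
    return recovery_plan[system][0]

print(freshest_copy("mssql-cluster")["copy"])  # real-time replica
```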
At our next meeting, and in my next column, I will lay out some recommendations for solving their MS-SQL data-protection challenges; the goals include virtually eliminating the recovery window and being able to stand up the server in the event of a disaster.