Over the years, DRP (disaster-recovery planning) and its cousin BCP (business-continuity planning) have generated a variety of methods for protecting infrastructure components like networks, servers and software, even in complex, n-tier client-server configurations. However, data remains at significant risk. This is because physical infrastructure protection and recovery strategies are based on redundancy or replacement. But data cannot be replaced, which takes half of those options off the table. The only way to protect it is redundancy--make a good copy and store it safely out of harm's way.
Some claim that disaster recovery focuses on IT infrastructure replacement, while BCP focuses on business-process continuance. Others argue that DRP is an oxymoron, saying, "If you can recover from it, how can it be a disaster?" This school claims that BCP is more reflective of the goals of the activity and has a more positive psychological impact (read: more politically correct).
At the end of the day, we don't care a whit what you call it--DRP, BCP or EIEIO. It all means the same thing: Avoid preventable interruptions and develop strategies to cope with interruptions you can't prevent.
The first step is to copy your data. That's easy enough, right? Au contraire. In data replication, many factors add complexity and cost. Time, for example. Copying takes time--less if the copy is made to disk, more if to tape. Top tape-backup speeds achieved in laboratories today hover at about 2 TB per hour, assuming a sturdy interconnect, a state-of-the-art drive, perfect media and a well-behaved software stack. Disk-to-disk copying takes a fraction of the time required by tape, though this, again, is a function of interconnect (usually WAN) robustness, array-controller efficiency and many other factors.
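To see why the time factor matters, here's a back-of-the-envelope sketch of the backup window implied by a given throughput. The 2-TB-per-hour tape figure comes from the lab numbers above; the disk-to-disk rate is a hypothetical placeholder, since real rates depend on the interconnect and array controllers.

```python
def backup_hours(data_tb: float, throughput_tb_per_hour: float) -> float:
    """Hours needed to copy data_tb at a sustained throughput."""
    return data_tb / throughput_tb_per_hour

# A 50 TB data set:
tape_window = backup_hours(50, 2.0)    # lab-best tape speed cited above
disk_window = backup_hours(50, 10.0)   # hypothetical disk-to-disk rate
print(f"tape: {tape_window:.1f} h, disk-to-disk: {disk_window:.1f} h")
```

The arithmetic is trivial, but it makes the planning point concrete: at lab-best tape speeds, 50 TB ties up a drive for the better part of a day, which is why the copy window itself becomes a constraint on the recovery strategy.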
Then there's geography. You want the copy of your data to reside far enough away so it won't be consumed by the same disaster that interrupted normal access to the original. With tape, this is no problem. The portability of removable media means you can make a local copy, then ship it off site for safekeeping. In disk-to-disk, the copy must be directed to an off-site target platform across a network interconnect on an ongoing basis.
Opinions vary on what constitutes an acceptable distance between source and target in disk-to-disk copying, but be aware that the greater the distance between the original disk platform and the remote platform, the further out of sync the data on the two devices becomes. This gap is called the delta in disaster-recovery parlance, and data deltas can be the difference between boom and bust. More to the point, deltas can determine whether the remote copy of your data can be used to restore application processing in the event of an interruption. Crash consistency is the shorthand used to express this concept.
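A rough way to reason about the delta: it's the data written at the source that hasn't yet landed at the target, which grows with both your write rate and your replication lag. The numbers below are hypothetical, purely for illustration; in practice the lag depends on distance, link bandwidth and the replication scheme.

```python
def delta_mb(write_rate_mb_per_s: float, lag_seconds: float) -> float:
    """Megabytes at risk: writes committed at the source but not yet
    applied at the remote site when an interruption hits."""
    return write_rate_mb_per_s * lag_seconds

# 20 MB/s of sustained writes with a 30-second replication lag
# puts roughly 600 MB of recent updates at risk in a failover.
at_risk = delta_mb(20, 30)
```

That at-risk figure is essentially what planners formalize as a recovery point objective: decide how much delta the business can tolerate, then work backward to the distance and replication scheme that keep the lag within it.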