In spite of these and many other advantages, there is one major disadvantage to using continuous data protection. Most CDP products rely on disk-to-disk backups. The very nature of disk-to-disk backups means that your backups remain on site, thus putting the backups at risk of being destroyed along with your primary data sets in the event of fire, hurricane or similar disaster. Fortunately, most CDP backup products contain mechanisms for archiving your backups to removable media. Of course this raises the question of how you can best protect your backups.
Use long-term and short-term data protection
At present, there don't seem to be any industry-standard recommendations for protecting disk-to-disk backups. When it comes to protecting a CDP backup system, my recommendation is to use both long-term and short-term protection. Long-term protection generally consists of copying your backups to tape on a periodic basis. The tapes can then be removed and safely stored off site.
While copying your backups to tape is important, using this as your only means of protecting your disk-based backups undermines the benefits of disk-to-disk backups. To illustrate my point, imagine that you back up all of your servers every fifteen minutes using a CDP product. Let's also assume that every night at 11:00, you copy the data from the backup server to tape. With that in mind, let's pretend that our CDP server has a catastrophic hardware failure at 3:00 in the afternoon.
In this particular situation, no production data has been lost. Only the backup server has failed, and all of the organization's file servers and application servers are still up and running. Because our backup server has failed, our file and application servers are no longer being actively protected, but let's forget about that for a moment.
In this situation, the only backups that still exist are the ones that were written to tape (unless the backup server's disk arrays were unaffected by the crash). What this means is that if another server failed, you could perform a restoration, but the most recent backup would be from 11:00 the previous night. You wouldn't have the option of restoring a file to the way it was an hour ago.
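The scenario above can be reduced to simple arithmetic: the recovery points you lose span the gap between the last tape copy and the moment the backup server fails. A minimal sketch (the function name and timestamps are illustrative, not from any product):

```python
from datetime import datetime, timedelta

def recovery_point_loss(last_tape_copy: datetime, failure_time: datetime) -> timedelta:
    """Window of recovery points lost when the disk-based backup server
    fails and only the most recent tape copy survives."""
    return failure_time - last_tape_copy

# Article's scenario: tape copy at 11:00 PM, CDP server fails at 3:00 the next afternoon.
last_tape = datetime(2024, 1, 1, 23, 0)
failure = datetime(2024, 1, 2, 15, 0)
print(recovery_point_loss(last_tape, failure))  # 16:00:00
```

Sixteen hours of fifteen-minute recovery points vanish, which is exactly the exposure the short-term disk tier is meant to close.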
This can be a big problem for organizations that have grown accustomed to backing up data on a continuous basis. As such, I recommend a two-tier approach to protecting disk-based backups. One tier would be long-term, tape-based protection. The second tier would be short-term, disk-based protection.
For example, consider Microsoft Corp.'s System Center Data Protection Manager, which is designed to allow you to use one Data Protection Manager server to back up another. You can back up the server's databases and its recovery points. That way, if the primary backup server fails, you still have an up-to-date backup of your backups. Some organizations even go so far as to place the secondary backup server in a separate facility and perform backups across a wide-area network (WAN) link. That way, the secondary backup server is protected against the destruction of the primary data center.
While this two-tiered approach works well, the key to making it work effectively is knowing how to schedule the backups. Most organizations that back up their disk-based backups to tape perform the tape backups on a nightly basis. You may find that you have to adjust this schedule based on your corporate disaster recovery policy or on any government regulations that you are required to comply with.
A secondary Data Protection Manager server can back up a primary Data Protection Manager as frequently as every fifteen minutes. One thing to keep in mind, however, is that the amount of disk space that will be required by the secondary backup server is directly proportional to the frequency of the backups. Cash-strapped organizations may choose to decrease the secondary server's backup frequency as a way of improving performance and decreasing the storage requirements.
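That proportionality is easy to see with a rough estimate: the number of recovery points per day scales inversely with the backup interval, so halving the interval roughly doubles the daily storage consumed. A back-of-the-envelope sketch (the per-point change volume is an assumed figure, not a DPM number):

```python
def recovery_points_per_day(interval_minutes: int) -> int:
    """How many recovery points accumulate per day at a given backup interval."""
    return (24 * 60) // interval_minutes

def daily_storage_gb(interval_minutes: int, avg_change_per_point_gb: float) -> float:
    """Rough daily storage estimate: points per day times the average
    amount of changed data captured per point (an assumed figure)."""
    return recovery_points_per_day(interval_minutes) * avg_change_per_point_gb

# Fifteen-minute backups vs. hourly backups, assuming 0.5 GB of changes per point.
print(daily_storage_gb(15, 0.5))  # 48.0 GB/day
print(daily_storage_gb(60, 0.5))  # 12.0 GB/day
```

Stretching the interval from fifteen minutes to an hour cuts the estimated daily consumption to a quarter, which is why cash-strapped shops dial the frequency down.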
Another way of decreasing the storage requirements on a secondary backup server is to decrease the retention time. If you are copying your backups to tape each night, then there is really no reason why the secondary backup server needs to retain more than 24 hours' worth of data. In practice, I would recommend keeping at least two or three days' worth of data on the secondary backup server just in case the tape backups were to fail one night. While keeping three days' worth of data online may sound like a lot, it is still far less than the two weeks' worth of backups that are commonly stored on a primary backup server.
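The retention trade-off comes down to a multiplication: total disk consumed is the retention window times the daily backup volume. A small sketch comparing the suggested three-day secondary window against a two-week primary window (the 48 GB/day figure is an assumed example volume):

```python
def retained_storage_gb(retention_days: int, daily_backup_gb: float) -> float:
    """Total disk required to hold a given retention window of backups."""
    return retention_days * daily_backup_gb

daily_volume = 48.0  # assumed daily backup volume in GB

print(retained_storage_gb(3, daily_volume))   # 144.0 GB - secondary server, three days
print(retained_storage_gb(14, daily_volume))  # 672.0 GB - primary server, two weeks
```

At the same daily volume, the three-day secondary window needs less than a quarter of the disk that the primary server's two-week window does.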
As you can see, it is important for organizations that use disk-based backups to have a backup of their backups. The key to making the most of these secondary backups is to schedule them in a way that ensures that you can recover from a catastrophic failure with minimal data loss.
About the author: Brien M. Posey, MCSE, has previously received Microsoft's MVP award for Exchange Server, Windows Server and Internet Information Server (IIS). Brien has served as CIO for a nationwide chain of hospitals and was once responsible for the Department of Information Management at Fort Knox. You can visit Brien's personal website at www.brienposey.com.