Published: 01 Oct 2013
With more data to protect, storage administrators need to look at backup alternatives to supplement or replace their standard process of weekly fulls and nightly incrementals.
Plenty of IT shops still perform nightly incremental backups along with weekly full backups, but many organizations are increasingly finding that their data -- and the recovery requirements for that data -- are breaking the backup models they've relied on for so long. For storage managers addressing inadequate backup operations, this may mean confronting the difficult but critical task of backup modernization.
Backup modernization can be a somewhat painful process; you not only need to choose a backup technology, you need to consider the impact the transition will have on key business processes and requirements.
Backup alternatives to consider
When it comes to modernizing your backups, there are many solutions available, ranging from the mundane and utilitarian to the exotic. Even so, there are three main flavors of data protection in use today:
- Continuous data protection (CDP)
- Snapshots
- Image-based backups
CDP technology protects data on a nearly continuous basis. Rather than running a large monolithic backup overnight, CDP products back up data every few minutes, 24 hours a day.
CDP products work by initially replicating data to a disk-based backup on a block-by-block basis. The software then monitors data for changes to the stored blocks or the creation of new blocks. When a block is created or modified, it's backed up. An index tracks versioning information and data deduplication ensures only unique blocks are stored on the backup media.
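As a rough illustration, the block-tracking and deduplication logic described above can be sketched in a few lines of Python. The class and structure names here are invented for illustration, the block size is unrealistically small, and real CDP products do this at the volume-driver level rather than in application code:

```python
# Minimal sketch of CDP-style block tracking with deduplication.
# Hypothetical structures; real products hook the storage stack directly.
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real products use e.g. 4 KB blocks

class CdpStore:
    def __init__(self):
        self.blocks = {}    # hash -> block bytes (unique blocks only)
        self.versions = []  # index: one list of block hashes per recovery point

    def backup(self, data: bytes) -> int:
        """Split data into blocks; store only blocks not already present."""
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # dedup: skip known blocks
            hashes.append(h)
        self.versions.append(hashes)
        return len(self.versions) - 1  # recovery point number

    def restore(self, version: int) -> bytes:
        """Reassemble a recovery point from the deduplicated block store."""
        return b"".join(self.blocks[h] for h in self.versions[version])

store = CdpStore()
v0 = store.backup(b"AAAABBBBCCCC")
v1 = store.backup(b"AAAAXXXXCCCC")  # only the changed middle block is stored again
```

After the second backup, the store holds two full recovery points but only four unique blocks, which is the essence of why CDP products can afford to capture changes every few minutes.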
Snapshots aren't the same thing as backups because they don't create an independent copy of the data. Instead, snapshots provide you with a way to roll a virtual machine (VM), file or application back to an earlier point in time. Snapshots can be based on the use of differencing disks or pointers. Because snapshots aren't actually backups, some backup vendors offer snapshots as a way of augmenting their product's recovery capabilities rather than as a standalone protective mechanism.
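The pointer-based approach can be shown with a toy model. Everything here is invented for illustration; in practice the hypervisor or filesystem manages the pointer tables, and the key point is that a snapshot copies pointers, not data:

```python
# Hedged sketch of a pointer-based snapshot. A snapshot is just a saved copy
# of the volume's block-pointer table, which is why it isn't a backup: the
# underlying data blocks are never duplicated.
class Volume:
    def __init__(self, nblocks: int):
        self.table = [f"data{i}" for i in range(nblocks)]  # block pointer table
        self.snapshots = {}

    def snapshot(self, name: str):
        # Copies only the (small) pointer table, not the data blocks.
        self.snapshots[name] = list(self.table)

    def write(self, block_no: int, new_data: str):
        # New data lands in a fresh block; the snapshot's pointers are untouched.
        self.table[block_no] = new_data

    def rollback(self, name: str):
        # Rolling back just restores the saved pointer table.
        self.table = list(self.snapshots[name])

vol = Volume(3)
vol.snapshot("before-change")
vol.write(1, "changed")
changed = vol.table[1]       # live volume sees the new data
vol.rollback("before-change")
```

Note that if the original blocks are lost -- a failed disk, for example -- the snapshot is lost with them, which is exactly why snapshots augment backups rather than replace them.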
Image-based backups represent one of the newer approaches to backup, and are used to back up VMs. The idea behind this type of backup is that the backup process captures a VM as a whole. If a recovery operation is required, then a copy of the VM is usually mounted in a sandbox environment so the data can be extracted. The sandbox mounting capability is also sometimes used to provide native recovery testing capabilities or even virtual lab capabilities. Image-based backups offer tremendous flexibility as long as your protected resources are all on virtual servers.
Don't ditch that legacy backup app yet
When it comes to replacing outdated technology, it's often a matter of "out with the old, in with the new." For backup infrastructures, however, it would be foolhardy to immediately dispose of your legacy backup system. Most organizations have data retention requirements, so the legacy backup hardware and software need to remain in place at least until the last backup created with them ages beyond the required retention period.
When the backups created using your legacy backup system outlive their useful lifespan, you'll need to find a way to securely dispose of the legacy backup media. If the old backup system is tape based, for example, you may be able to demagnetize and recycle the old tapes. There are also services available that will physically destroy outdated backup tapes and other media.
Important business considerations
Regardless of what type of backup technology you choose to implement, there are some critically important considerations to take into account with regard to your organization's business needs. Some of these factors need to be considered before you purchase a new backup system; others need to be taken into account once the new backup process is in place.
Retention requirements. One of the first things you need to think about when choosing a modern backup alternative is your backup retention requirements. In other words, how far back in time do you need to go to retrieve data?
This is important because most modern backup solutions are disk based, cloud based or both. Tape-based backups provide a nearly unlimited retention span -- you can back up to tape and then keep the tape for as long as you like -- but that isn't necessarily the case with disk-based backups. Disks have a finite capacity, and that capacity limits the total amount of historical data you're able to retain within your backups.
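A quick back-of-the-envelope calculation can help size a disk-based target. The figures below are placeholders, not recommendations; substitute your own full-backup size, daily change rate and deduplication ratio:

```python
# Rough retention estimate for a disk-based backup target.
# All input figures are hypothetical examples.
def retention_days(capacity_tb: float, full_tb: float,
                   daily_change_tb: float, dedup_ratio: float) -> int:
    """Days of daily incrementals that fit after the initial full,
    assuming a steady change rate and a constant deduplication ratio."""
    usable = capacity_tb - full_tb / dedup_ratio   # space left after the full
    per_day = daily_change_tb / dedup_ratio        # deduplicated daily growth
    return int(usable / per_day)

# Example: 50 TB target, 20 TB full, 1 TB/day of change, 4:1 dedup
print(retention_days(50, 20, 1, 4))  # 180 days
```

If your retention requirement is measured in years rather than months, an estimate like this quickly shows whether disk alone is realistic or whether you need a tape or cloud tier behind it.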
Even if disk capacity weren't an issue, some modern backup applications impose artificial limits of their own. For instance, some CDP products differentiate between short-term protection (disk) and long-term protection (tape), and place very strict limits on the total number of recovery points that can be stored within short-term protection.
Agent compatibility. If the backup solution you're considering is agent based, then agent compatibility must be a major consideration prior to purchase. Although most of the major backup application players offer agents for the most popular operating systems, you need to verify that agents exist for the operating systems you're running in your own environment.
Another consideration that's sometimes overlooked is compatibility with future operating systems. For example, Windows Server 2012 R2 is soon to be released. Some backup vendors already offer support for the new operating system, but others don't. If you plan to migrate to Windows Server 2012 R2 in the near future, you'll need to ensure that any backup vendor under consideration will support the new operating system.
Application awareness. Application awareness is one of the most important criteria in selecting a backup application. If you're backing up anything other than file data, your backup software must support the applications you're running.
For CDP or image-based backup products, ensuring application awareness usually means verifying that the backup product includes a Microsoft Volume Shadow Copy Service (VSS) writer for the applications running on the servers you're backing up. In the case of snapshot products, however, you'll need to look for granular application rollback capabilities.
Although most snapshot utilities support rolling back the entire server, this can have disastrous consequences for database applications because snapshots don't capture transactions stored in the server's memory at the time the snapshot is taken. As such, snapshot rollbacks can cause database corruption unless the snapshot product is specifically designed to work with the application running on your server.
The initial backup. After you've purchased and implemented a modern backup solution, there are some things you'll need to consider regarding your first backup. Because the new backup system has never been used to back up your data, it will have to start by making full backups of everything. Depending on how much data you have on hand, this initial seeding process can be quite time-consuming.
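You can estimate the seeding window with simple arithmetic. The data size and throughput below are hypothetical; plug in your own data volume and the sustained rate your network and backup target can actually deliver:

```python
# Rough initial-seeding time estimate; the figures are placeholders.
def seeding_hours(data_tb: float, throughput_mbps: float) -> float:
    """Hours to push the initial full backup at a sustained rate
    (throughput given in megabytes per second)."""
    total_mb = data_tb * 1024 * 1024   # TB -> MB
    return total_mb / throughput_mbps / 3600

# Example: 30 TB of data at a sustained 120 MB/s
print(round(seeding_hours(30, 120)))  # about 73 hours, roughly three days
```

In practice the sustained rate is usually well below the interface's rated speed, so it's worth measuring a sample backup before trusting any estimate like this one.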
In addition, your servers may be left vulnerable during the initial seeding process. For example, let's say the initial backup takes three days to complete and a server fails a day and a half into the backup process. You may or may not be able to recover the server depending upon what has been backed up at that point.
One tempting solution to this problem might be to run your legacy backup software alongside the new backup software. However, this can cause problems. If both products manipulate the archive bit on file data, for example, each product can become confused about which files have and haven't been backed up. Similarly, database applications such as Microsoft Exchange rely on a successful backup to commit and truncate transaction logs. Running two separate backup products can result in missing log files, which can impact your ability to perform a restoration.
A better solution is to implement the new backup solution gradually, rather than initially configuring it to back up every resource in your entire organization. For instance, you might start by backing up one application or one virtual machine at a time. You can still safely use your legacy backup product to back up everything else. This approach minimizes the amount of data that is left vulnerable to loss during the transition process. It also greatly reduces the strain that the initial seeding process places on your network and storage infrastructure.
Recovery testing. This is an important part of the data protection process in any organization. However, recovery testing becomes even more important if you have recently transitioned to a new backup solution.
Once you begin the transition process, you should perform recovery testing on at least a weekly basis for the first six months. During the first few months the new backup system is in use, you're likely to make adjustments to the backup configuration, and those adjustments can sometimes have unexpected consequences. The only way to verify that your data is still protected is to regularly test your ability to perform recovery operations.
Modernizing a backup infrastructure may not be as simple a process as some backup vendors would lead you to believe. There's a lot of work that goes into choosing backup alternatives and then making sure they're properly implemented and are adequately protecting your data. But once you're past the initial phases of implementation and verifying operations, you'll have a much more flexible and scalable data protection process in place.
About the author:
Brien Posey is a Microsoft MVP with two decades of IT experience. Before becoming a freelance technical writer, Brien worked as a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the nation's largest insurance companies and for the Department of Defense at Fort Knox.