The problem has become particularly acute as backup has grown more complex. Enterprises once turned to the full backup every time a single file or folder was accidentally deleted, which happens fairly often. Now file and folder recoveries are handled by separate tools such as Windows System Restore, and full backups aren't called on unless there's a major problem, which is much less common.
Nearly all backup utilities can check blocks as they are written to disk or tape, usually by reading back some kind of checksum. But that by itself is inadequate. For one thing, it doesn't exercise the whole backup system. And it can't spot files and folders that should have been backed up but, for whatever reason, weren't. Testing needs to be a separate part of the backup process.
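To make the limitation concrete, here is a minimal sketch (the function names are illustrative, not from any particular backup product) of the per-block write-verify pass described above: a checksum is computed as data is written, then the data is read back and checksummed again. It confirms that what was written landed intact, but it says nothing about files that were never written at all.

```python
import hashlib

CHUNK = 64 * 1024  # verify in 64 KB blocks, as a backup utility might


def write_with_checksum(data, path):
    """Write data to path, computing a SHA-256 digest as it is written."""
    h = hashlib.sha256()
    with open(path, "wb") as f:
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            f.write(block)
            h.update(block)
    return h.hexdigest()


def read_back_checksum(path):
    """Re-read the file and checksum it, as a write-verify pass would."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK), b""):
            h.update(block)
    return h.hexdigest()
```

If the two digests match, the written copy is intact; if they differ, the media or the write path corrupted the data. Neither outcome tells you whether every file that should have been backed up actually was.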
Test your backups regularly
According to the Symantec study, most large companies test DR plans at least once a year. Simple backups should be tested much more frequently -- at least once a quarter and whenever there is a major hardware or software change to your backup system. It's particularly important to run a test after upgrading the firmware in your backup system to make sure the new firmware works properly with the rest of your system.
As much as possible, your test should duplicate the conditions you will face when you need to actually restore. The ideal situation would be to do a complete restoration of all your data to a second system with an identical configuration. This isn't always possible, of course, but you should test as much of the backup as you can on as much of the backup system as feasible.
If possible, test on the hardware you will actually be restoring to. This is especially important if that machine is different from the one that created the backup: some backup systems are surprisingly picky about the hardware they restore to, and some expect the target hard drive to be exactly the same size as the one the backup was taken from. That's a problem not only in a DR situation, but also if you've had to replace a failed hard drive. Drive technology is advancing so quickly that today's standard size may be hard to find in a couple of years.
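A simple pre-flight check can catch a capacity mismatch before a restore fails partway through. The sketch below (an illustrative helper, not part of any backup product's API) uses Python's standard `shutil.disk_usage` to confirm the target volume can hold the backup plus some headroom for filesystem overhead.

```python
import shutil


def check_restore_capacity(backup_size_bytes, target_path, headroom=0.10):
    """Return True if the target volume has room for the backup.

    headroom reserves extra free space (10% by default) for filesystem
    overhead, so a restore doesn't fill the volume to the last byte.
    """
    usage = shutil.disk_usage(target_path)
    needed = int(backup_size_bytes * (1 + headroom))
    return usage.free >= needed
```

Running a check like this as the first step of every test restore makes the "drive is too small" failure show up in minutes instead of hours.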
One way to handle the problem is to restore to a virtual machine. Virtualization software vendors like VMware Inc. can configure virtual machines to mimic existing hardware, including disk sizes and other configurations.
Testing should consist of more than simply poking around. If you restore only a couple of files, for example, you can't be sure your directory trees and other structures survived intact. When you test a restore, take a minute to study the directories and confirm that everything that should be backed up actually was. The test should include restoring entire folders, complete with subfolders, as well as one or more critical applications.
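That directory-level inspection can be automated. Here is a hedged sketch, using Python's standard `filecmp` module, that recursively compares the original tree against the restored copy and reports anything missing or changed; the function name is illustrative.

```python
import filecmp
import os


def compare_trees(original, restored):
    """Recursively compare an original tree with its restored copy.

    Returns (missing, differing): paths present in the original but
    absent from the restore, and paths whose contents differ. Because
    it walks subfolders, a restore that silently drops a subtree is
    caught rather than missed by a spot check.
    """
    missing, differing = [], []

    def walk(cmp, prefix=""):
        missing.extend(os.path.join(prefix, n) for n in cmp.left_only)
        differing.extend(os.path.join(prefix, n) for n in cmp.diff_files)
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(original, restored))
    return sorted(missing), sorted(differing)
```

An empty result from both lists is the pass condition; anything else pinpoints exactly which folders or files the restore mishandled.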
Every critical application should be tested regularly, if not during every test run. Pay special attention to complex applications. Microsoft Exchange, for example, is a particular problem because of its complex database structure. (An Exchange database is actually several linked databases.) To be sure your backup was successful, you need to test all of Exchange. Some things are very difficult to test in their natural environment; companies like Anue Systems Inc. offer network emulators that let you simulate your network for testing purposes.
About the author: Rick Cook specializes in writing about issues related to storage and storage management.
This was first published in September 2008