Backup best practices are always evolving
Backup technology doesn't stand still, so the preferred methods for running an efficient backup environment must also adapt.
After four years of writing this column, I'm struck by how much has changed--and how much has remained the same--in backup and recovery. The first column I wrote discussed the do's and don'ts of tape library selection, which is still a relevant topic but not necessarily the kind of material a topical publication would focus on. Since then, new technologies have emerged and backup seems to have been examined from every conceivable angle.
The foundation for the most important changes in backup technology can be summed up in a single word: disk. Low-cost, high-capacity disk storage has become the enabler for a variety of technologies that are redefining backup operations. Some of these advances, such as virtual tape libraries (VTLs), represent evolutionary enhancements to the traditional backup process, while others like continuous data protection and single-instance storage are potentially far more transformational.
Nondisk technologies have also had an impact on how we architect backup solutions. These include dramatic increases in tape capacity and speed, more widespread availability of low-cost network bandwidth, and server technologies such as virtualization.
Backup and recovery best practices continue to evolve. Data capacity demands, recovery expectations, and new application and business requirements all have a bearing on how we manage data protection. Toss in the new technology options, and it's worth re-examining some of our considerations for best practices.
Disk changes everything
Incorporating disk into the backup environment seems to be every backup administrator's goal, but the best method to accomplish this is subject to debate. The options for adding disk to backup were initially limited to the rudimentary support included with backup software products. Aside from IBM Tivoli Storage Manager (TSM), which was designed around the paradigm of a disk-based storage cache, most backup products lacked optimized support for disk. This made the concept of VTLs a particularly attractive way to incorporate disk into the environment.
However, backup software vendors have scrambled to enhance the level of disk support in their products. Newer options, like EMC's NetWorker DiskBackup Option, handle disk like the random-access device it is, providing multiple concurrent access streams and improved performance. In Veritas NetBackup 6.0, Symantec has added features like disk watermarks to better support the TSM-like, disk-as-cache model.
In the NAS arena, the tighter integration of Veritas NetBackup with Network Appliance has yielded more advanced disk backup capabilities like the Veritas NetBackup NAS SnapVault Option, which essentially provides long-needed integration between backup and NAS snapshot management. Not to be outdone, EMC's NetWorker offers a NAS PowerSnap Option and extends its NDMP agent to support disk as a target device. Because the use of snapshots and split mirrors has long been a popular method of creating images to be backed up, being able to control and manage that functionality through the backup application is highly desirable and reinforces this approach as a best practice.
For most organizations, VTLs are the most straightforward and least disruptive option for incorporating disk into backup. And we now have enough collective experience to begin to suggest VTL best practices, including some that contradict traditional tape-only practices.
With traditional backups, it was considered extremely risky to extend the period between full backups. With disk, there's no such concern, and doing fewer full backups actually extends the capacity of the VTL, which means potentially retaining more versions of data on disk.
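To see why fewer fulls stretch VTL capacity, consider a rough back-of-the-envelope model (all figures here are hypothetical, not vendor sizing guidance):

```python
def vtl_space_tb(full_tb, daily_change_rate, retention_days, full_interval_days):
    """Estimate VTL space consumed over a retention window: one full
    backup per interval, plus a daily incremental on the other days.
    Figures are illustrative only, not vendor sizing guidance."""
    fulls = retention_days / full_interval_days
    incrementals = retention_days - fulls
    return fulls * full_tb + incrementals * full_tb * daily_change_rate

# 10 TB of primary data, 5% daily change, 28-day retention on the VTL
weekly  = vtl_space_tb(10, 0.05, 28, 7)   # a full every 7 days
monthly = vtl_space_tb(10, 0.05, 28, 28)  # a single full in the window
print(f"weekly fulls:  {weekly:.1f} TB")   # 52.0 TB
print(f"monthly fulls: {monthly:.1f} TB")  # 23.5 TB
```

Under these assumed numbers, stretching the full-backup interval cuts the space consumed by more than half, which translates directly into more versions retained on disk.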
In the IBM TSM world, the disk storage cache was traditionally sized to accommodate at least one night of incremental backups that would later be migrated to tape. The idea was to avoid frequent disk-to-tape migrations that would slow the backup process. With VTLs, TSM still benefits from having a disk storage cache, but the sizing rule no longer applies. The cache can now be a fraction of the size and migrations to the VTL aren't a concern in most cases.
Stop the tape
Despite the attractiveness of disk, organizations with sizeable quantities of data still depend on tape. New generations of tape technology are appearing more frequently but, ironically, the speed and capacity increases aren't always good news. Replacing older technology with new high-performance drives raises expectations of dramatic improvements. In practice, the opposite is often the case.
Tape is a serial technology, so it needs a steady stream of data to maintain its performance. If the data stream can't keep up, performance drops dramatically as the tape drive stops, waits for data, repositions, writes, stops again and so on. This shoe-shining effect also reduces tape drive and media life.
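The cost of shoe-shining can be sketched with a deliberately crude throughput model (the drive speed, buffer size and repositioning penalty below are illustrative assumptions, not measurements of any particular drive):

```python
def effective_mb_s(drive_mb_s, feed_mb_s, reposition_s=2.0, buffer_mb=256):
    """Crude model of tape shoe-shining. When the incoming feed keeps up,
    the drive streams at native speed. When it doesn't, each buffer's
    worth of data costs a fill period, a write burst, and a repositioning
    penalty, so effective throughput collapses below even the feed rate.
    All parameter values are illustrative assumptions."""
    if feed_mb_s >= drive_mb_s:
        return drive_mb_s  # drive streams at full native speed
    fill_s = buffer_mb / feed_mb_s     # wait for the buffer to fill
    write_s = buffer_mb / drive_mb_s   # burst the buffer to tape
    return buffer_mb / (fill_s + write_s + reposition_s)

# An assumed 80 MB/s drive fed at only 30 MB/s:
print(f"{effective_mb_s(80, 30):.1f} MB/s")  # well under 30 MB/s
```

The point isn't the exact numbers; it's that a starved drive delivers less than the feed rate, so buying a faster drive without fixing the feed only widens the gap.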
Often, performance problems attributed to "slow tape drives" are actually caused by bottlenecks elsewhere in the data path, and introducing even faster drives actually exacerbates the problem. To improve performance, more data has to be written to fewer drives; for most backup apps this means increasing multiplexing settings.
Does this make multiplexing a best practice? No. Multiplexing is a tradeoff--it may improve backup time, but it degrades restore time. Multiplexing may be a "necessary" practice to deal with the limitations of a given technology in a particular environment, but it's not the preferred way to deal with the problem. A preferred approach is to identify and analyze performance bottlenecks, and to address them where possible. If needed, disk may be introduced as a staging device to keep tape streaming, but data from multiple sources doesn't need to be interleaved via multiplexing.
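The restore-time penalty is easy to quantify with a simplified model: when several clients' streams are interleaved on one tape, restoring a single client still means reading through roughly the whole interleaved region. A minimal sketch, with hypothetical sizes:

```python
def restore_read_gb(client_gb, mux_streams):
    """Simplified model of the multiplexing restore penalty: with N
    clients interleaved on tape, restoring one client requires reading
    (roughly) N times that client's data, because the drive must pass
    over every other client's interleaved blocks as well."""
    return client_gb * mux_streams

# Restoring one 100 GB client:
print(restore_read_gb(100, 1))  # unmultiplexed: read ~100 GB
print(restore_read_gb(100, 4))  # 4-way multiplexed: read ~400 GB
```

That fourfold read amplification is the tradeoff being accepted every time multiplexing is turned up to keep drives streaming.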
The virtual environment
Backup best practices are also affected by what's backed up. The widespread adoption of server virtualization has significantly changed the server landscape, and virtualized environments, such as those supported by VMware, have a direct impact on how backup is designed.
Backing up a virtualized server can be identical to backing up a physical server. It's possible to install backup client software and perform a traditional backup across the network. But what if 20 virtual servers reside on one physical server and they all try to back up at the same time? This can be a significant load-balancing concern, not to mention having to buy 20 client licenses for a single physical server.
It's also possible to back up a virtual machine as a single disk image. On a VMware ESX Server, each virtual machine exists as a file that can be backed up. It's therefore possible to back up all virtual servers as part of a physical server backup.
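Because each VM is just a set of files in a datastore directory, an image backup amounts to collecting those files and handing them to the backup application. A minimal sketch of that collection step (the directory layout and extension list are illustrative assumptions, not an exhaustive inventory of ESX file types):

```python
import os

# On ESX, each virtual machine lives in a datastore directory as a
# handful of files (.vmx config, .vmdk virtual disks, etc.). This sketch
# gathers them so they can be passed to a backup job as one image set.
# The extension list here is an illustrative assumption.
VM_EXTENSIONS = (".vmx", ".vmdk", ".nvram")

def vm_image_fileset(datastore_dir):
    """Return the files under datastore_dir that belong to VM images."""
    files = []
    for root, _dirs, names in os.walk(datastore_dir):
        for name in names:
            if name.endswith(VM_EXTENSIONS):
                files.append(os.path.join(root, name))
    return sorted(files)
```

In practice the VM should be quiesced or snapshotted before its files are copied; grabbing the virtual disk of a running machine yields a crash-consistent image at best.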
Individual virtual server backups allow easy file-level recovery for normal operational restores, while image backups enable fast restores of an entire system for DR or migration purposes. It should be noted that in addition to backing up the individual virtual machines, the Linux-based VMware service console must also be backed up.
From a software perspective, the major backup apps provide support for a backup client running on a VMware virtual machine and on the service console, but it's critical to validate the appropriate version. If support for specific app agents is required, check with your backup software vendor to ensure compatibility.
Managing virtual machine backups across a multitude of physical machines can become complex. To address this, VMware Consolidated Backup was introduced with VMware Infrastructure 3. This product centralizes management of VMware backups; it can quiesce a virtual machine, take an integrated snapshot and present it to a backup proxy server, where both file-level and image backups can be performed. It provides much of its functionality through pre- and post-processing scripts executed in conjunction with standard third-party backup client software.
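The pre/post-processing pattern itself is generic and worth internalizing: quiesce and snapshot before the job, back up from the snapshot, and always release the snapshot afterward, even if the backup fails. A minimal sketch of the pattern (the hook names are hypothetical, not VMware Consolidated Backup's actual script names):

```python
# Generic pre/post-processing pattern of the kind VMware Consolidated
# Backup wraps around a third-party backup job. Hook names are
# hypothetical placeholders, not VCB's actual scripts.
def run_backup_with_hooks(pre, backup, post):
    pre()                # e.g. quiesce the VM and take a snapshot
    try:
        return backup()  # back up the mounted snapshot on the proxy
    finally:
        post()           # always release the snapshot, even on failure

calls = []
run_backup_with_hooks(
    pre=lambda: calls.append("snapshot"),
    backup=lambda: calls.append("backup"),
    post=lambda: calls.append("release"),
)
print(calls)  # ['snapshot', 'backup', 'release']
```

The try/finally is the important part: a snapshot left mounted after a failed backup keeps accumulating changes and consuming datastore space.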
Given the adoption rate of virtual servers, it's reasonable to expect further enhancements from existing apps and additional alternative approaches to protecting these environments. So, at this stage, best practices are still evolving.
Regardless of the technology selected, the fundamentals still apply when it comes to backup best practices: Policy, process and metrics are the keys to ensuring successful and recoverable backups.