If you are backing up desktops or workstations, centralized backups are an excellent way to make sure your data is properly protected. Rather than relying on users to perform the backups, a centralized backup solution handles the job automatically and consistently.
Centralized backup is generally a good thing, but there is no one-size-fits-all solution. With so many approaches on offer from so many vendors, deciding which approach is best for you and selecting the right vendor requires careful consideration.
Centralized backup is so important today that most storage management companies, including Computer Associates International Inc., Veritas, and EMC Corp., and suppliers like Arkeia Corp. and CommVault Systems Inc., offer options for this process. A number of vendors, such as IBM Corp., offer several different products aimed at different parts of the market. Designs range from third-party services that back up data to remote sites, to local backup over the company LAN.
In fact, the biggest problem an enterprise faces in choosing a centralized backup system is narrowing down the choices. Once you have settled on a solution type and decided how complete you want your purchase to be, you'll find that you only have a manageable number of vendor choices left.
What is the backup topology?
One of the key factors in selecting a centralized backup product is the backup topology. The simplest, and most widely vendor-supported, topology uses a single backup system to back up all the enterprise data. When dealing with a number of branch offices, consider using a third-party service, such as MSI International's RemoteStor. Another option for backing up a distributed enterprise is to do it yourself using an application like LiveVault's InControl, which is designed to back up multiple remote locations.
What is the available technology?
If your enterprise has a SAN, a serverless backup offers some distinct advantages. It doesn't overload the corporate network, and it backs up at SAN speeds. Generally, a properly configured SAN can handle the additional load of centralized backup without a major investment in additional equipment. This is assuming, of course, that the SAN hardware is intelligent enough to handle serverless backup. (Keep in mind that a 'serverless' backup over a SAN uses the SAN switches or directors to handle the backup. In effect, the SAN is the server.)
This option usually only applies if you already have a SAN. Although the prospect of serverless SAN backup can be an argument in favor of installing a SAN, it is seldom enough on its own to justify one.
At the other end of the spectrum are the remote backup systems that work over the Internet, usually by establishing a virtual private network (VPN). Bandwidth and the time and cost to handle the backups are the concerns here. Most remote backup schemes use a delta backup, either at the block or file level, to back up only those items that have changed since the previous backup. An additional concern with third-party backup services is that the cost depends on the amount of data backed up.
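A file-level delta backup of the kind described above boils down to scanning for files modified since the previous backup ran. The sketch below is a simplified illustration in Python, not any vendor's implementation; the function name and the reliance on file modification times are assumptions for the example.

```python
import os


def files_changed_since(root, last_backup_time):
    """File-level delta: return paths under root modified after the
    previous backup's timestamp (seconds since the epoch)."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Only files touched since the last run are candidates
            # for transfer, which is what keeps bandwidth use down.
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

A block-level delta works the same way in principle but compares checksums of fixed-size regions within each file, so that only the changed blocks of a large file cross the wire.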
Generally, centralized backups over the Internet require careful attention to controlling what is being backed up and eliminating junk data before backing up. Some backup systems, such as InControl, can groom the data, eliminating inappropriate file types. Most systems will only back up designated files or folders.
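Grooming of this sort can be as simple as filtering the candidate file list by extension before anything is transferred. The sketch below is hypothetical; the exclusion list and function name are illustrative and do not reflect how InControl or any other product actually implements grooming.

```python
import os

# Hypothetical exclusion list; real products let administrators
# configure which file types count as junk.
JUNK_EXTENSIONS = {".tmp", ".mp3", ".cache"}


def groom(paths, junk_extensions=JUNK_EXTENSIONS):
    """Drop files whose extension marks them as inappropriate to back up."""
    return [p for p in paths
            if os.path.splitext(p)[1].lower() not in junk_extensions]
```

For example, `groom(["report.doc", "scratch.tmp", "song.MP3"])` keeps only `"report.doc"`; trimming junk like this before an Internet backup directly reduces both bandwidth use and per-gigabyte service fees.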
Turnkey or component?
Centralized backup products vary enormously in their completeness. Backup services, such as MSI, generally handle everything. Your company designates which files are to be backed up and the service does everything else.
Other companies offer complete packages of hardware and software, such as IBM's Centralized Backup and Restore Solution. Many vendors, such as Veritas, just sell the software and let you select the hardware and other components.
In comparing the cost of the different options, it is important to compare total cost of ownership (TCO). It is particularly important to figure in the cost of administering and managing backups, including the cost of setting up the system. Another cost that must be carefully considered involves providing the bandwidth necessary to make the system meet service levels.
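A back-of-the-envelope TCO comparison amounts to adding one-time costs to recurring costs over the evaluation period. The breakdown below is a simplified illustration with assumed cost categories; real TCO models include more line items, but the arithmetic is the same.

```python
def total_cost_of_ownership(acquisition, setup, annual_admin,
                            annual_bandwidth, years=3):
    """Illustrative TCO: one-time costs plus recurring costs over
    the evaluation period. All inputs are hypothetical figures,
    not vendor pricing."""
    one_time = acquisition + setup
    recurring = years * (annual_admin + annual_bandwidth)
    return one_time + recurring
```

The point of laying it out this way is that a system with a low purchase price but heavy administration or bandwidth needs can easily cost more over three years than a pricier turnkey option.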
About the author: Rick Cook has been writing about mass storage since the days when the term meant an 80 K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last 20 years, he has been a freelance writer specializing in storage and other computer issues.
This was first published in June 2005