The typical remote site for a Fortune 150 to a Fortune 2000 company is not much different from the core site of a small- to medium-sized business (SMB). The number of servers requiring backup is usually fewer than 50, data is predominantly files and email, WAN bandwidth is often expensive and limited, and few if any sites have dedicated staff for backup administration. The traditional remote site backup infrastructure (such as a server with standalone tape drives) requires a healthy dose of hands-on administration and support, yet most remote sites lack the staff for the job. Ultimately, remote sites are remote for a good reason, and this makes remote site backups one of the most challenging IT tasks. Backup infrastructure as we know it (servers, disk and tape) is changing significantly and remote sites are the first to benefit.
The primary driver behind the changing backup infrastructure is data deduplication. Deduplication is radically altering backup infrastructure, mainly because it makes disk economical and, when coupled with replication, makes WAN bandwidth consumption a minor issue for vaulting backup data to alternate sites. Deduplication stores data on disk by eliminating redundant data. Delta differencing (a third cousin of deduplication) can achieve similar results. This technology is available in a variety of forms: backup storage devices (virtual tape libraries, NAS, etc.), backup software, and even managed backup services (delivered online or managed remotely). Remote sites can benefit from deduplication technologies, yet for many customers the selection of options and design alternatives can be confusing.
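To make the core idea concrete, here is a minimal sketch of hash-based deduplication: data is split into fixed-size chunks, each chunk is fingerprinted, and a chunk is stored only the first time its fingerprint is seen. The function name, chunk size, and sample data are illustrative, not drawn from any particular product (real appliances typically use variable-size, content-defined chunking).

```python
import hashlib

def dedupe(blobs, chunk_size=4096):
    """Store each unique chunk once; return the chunk store plus
    per-blob 'recipes' (ordered fingerprint lists) for reassembly."""
    store = {}    # fingerprint -> chunk bytes (stored once)
    recipes = []  # one list of fingerprints per input blob
    for blob in blobs:
        recipe = []
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            store.setdefault(h, chunk)  # keep only the first copy
            recipe.append(h)
        recipes.append(recipe)
    return store, recipes

# Two nightly "full backups" that differ in only one chunk:
day1 = b"A" * 8192
day2 = b"A" * 4096 + b"B" * 4096
store, recipes = dedupe([day1, day2])
raw_bytes = len(day1) + len(day2)                  # 16384 ingested
stored_bytes = sum(len(c) for c in store.values())  # 8192 on disk (2:1)
```

Because only previously unseen chunks need to cross the wire, the same mechanism is what makes replication of backup data over a constrained WAN practical.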
I recommend slowly stepping away from the technology and taking a look at your requirements first. From an operational and disaster recovery perspective, what are your recovery time objectives (RTOs) and recovery point objectives (RPOs)? Do risk or security drivers support an initiative to reduce media handling? What are the legal risks associated with backup data retention? Are any key assumptions changing (such as WAN bandwidth upgrades)? What are the constraints for the solution (technology, WAN, cost/budget, support, etc.)? Most importantly, define what success looks like for the solution.
Next, document the current state of your backup infrastructure, and detail scale and performance requirements for each remote site, along with WAN bandwidth per site. It's important also to factor in bandwidth limiting (such as QoS settings) for data traffic. Alternative site strategies range from hub-and-spoke designs, to cascading, to the use of third-party sites. Understanding these variables will save a lot of time in refining designs.
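A back-of-the-envelope calculation is often enough to flag sites where the WAN is the constraint. The sketch below estimates hours needed to replicate one day's backup data over a link, given a QoS cap and an assumed deduplication ratio; the function and the example figures (50 GB daily change, a 10 Mbps link) are hypothetical inputs, not vendor numbers.

```python
def replication_hours(daily_change_gb, link_mbps,
                      dedup_ratio=1.0, utilization=0.5):
    """Estimate hours to replicate one day's backup over a WAN link.

    dedup_ratio: e.g. 10.0 means only 1/10th of the changed data
                 actually crosses the wire.
    utilization: fraction of the link backups may consume (QoS cap).
    """
    bits_to_send = daily_change_gb * 8 * 1e9 / dedup_ratio
    effective_bps = link_mbps * 1e6 * utilization
    return bits_to_send / effective_bps / 3600

# 50 GB of daily change, 10 Mbps link, 10:1 dedup, 50% QoS cap:
hours = replication_hours(50, 10, dedup_ratio=10, utilization=0.5)
# roughly 2.2 hours -- without dedup the same transfer takes ~22 hours
```

Running the numbers per site, against each site's documented bandwidth and change rate, quickly shows which designs fit inside the nightly window and which require a bandwidth upgrade or a different architecture.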
From a design point of view, explore all of the options associated with a short list of potential vendors. They don't necessarily have to offer the same type of solution, so having a deduplication appliance, a delta-differencing software solution and a managed online backup service provider on the table helps you explore a wider range of design alternatives. As you explore designs for remote site backup, keep the following in mind:
- What works now (why fix it if it's not broken?)
- Before scrapping the old, explore opportunities to retrofit existing backup infrastructure with newer technologies
- How valued are incumbent vendor relationships, and what can they offer?
- What are the high-priority problems to address (by site, by data type, etc.)?
- How much technology change is realistic to deploy?
- Do different possible solutions fit different types of remote sites (by size, location, bandwidth, etc.)?
- How can you maximize investment protection (not only assets, but skills, experience, and institutional knowledge)?
- Will data growth invalidate the solution architecture or budget forecasts?
- Can you build a rational business case to support the decision?
- Are there ways to simplify the designs or remove complexity?
- How can the solution designs be standardized as much as possible?
Making the decision on next-generation backup infrastructure for remote sites isn't trivial. The long-term architectural, support, cost, and risk impacts are significant enough to justify a formal project because the technologies you roll out are very likely to live a long life. Even if you've already invested time and energy into vendor due diligence, invest the time to create an objective view of the requirements, technologies and design alternatives before making the final decision to move forward.
About the author: John Merryman is services director for recovery services at GlassHouse Technologies Inc., where he's responsible for service design and delivery worldwide. Merryman often serves as a subject matter expert in data protection, technology risk and information management-related matters, including speaking engagements and publications in leading industry forums.