Real-World DR

Marc Staimer

This article first appeared in "Storage" magazine. For more articles of this type, please visit www.storagemagazine.com.

Storage administrators often ask what their peers are doing to solve the same disaster recovery (DR) problems they face. What they really want to know is what's working and what's cost-effective.

Information of this type is usually difficult to make public. Many IT organizations have a policy restricting employees from speaking to the press or others about these issues. Companies often don't want to bring attention to their problems, even if they've been resolved. To bring you as candid a report as possible, we've withheld some company names in this article. The pertinent facts remain unchanged, however.

Users are also typically tight-lipped about how much they actually paid for a DR solution. Therefore, the "cost to implement" numbers that follow are based on approximate MSRP and estimated operating expenses (OpEx) for the time invested, although all of the companies report that they paid considerably less than the sticker price.

The following case studies have a central theme: Increasing data levels and stricter compliance regulations are forcing companies to look to newer technologies to solve their growing DR and backup pains. The old DR paradigm of backing up to tape and driving the tapes offsite is broken. Companies are increasingly replicating data over high-speed WANs and using incremental backup technologies to meet their service level agreements (SLAs) and backup windows, and to protect their critical data less expensively. Of course, restores are much easier, too.

Compliance crisis

A multinational bank's storage requirements for its worldwide operations and hundreds of distributed locations had been growing at an increasing rate, putting intolerable pressure on its backup/restore systems. The firm's NetBackup and ARCserve systems, from Veritas Software Corp. and Computer Associates (CA) International Inc., respectively, were unable to meet its needs, and the problems were getting progressively worse. Backup windows were missed more often than they were met. Restores of a single file were taking between five and 10 hours; SQL database restores took days to complete. The bank estimated that only about 70% of its data was protected and recoverable.

These issues led to other problems. Software patch windows were missed because of the time required for backups. Tapes were trucked offsite to a tape-vault service provider, but if a backup window was missed, incomplete backup tapes were shipped offsite.

In addition, backup data security was essentially non-existent. Tape data wasn't encrypted, so anyone could copy/load a tape and have access to highly confidential user information and records -- a serious liability as evidenced recently by another big bank's well-publicized loss of tapes containing 1.5 million user records.

It was obvious that the bank would be in serious trouble if it ever had to recover from a disaster. "This was a nightmare of epic proportions," said one bank executive. "And one that was not without its consequences." In an age of regulatory compliance, failure to protect the bank's data and recover it in an adequate period of time put the bank at risk of severe financial penalties. The situation had to be fixed quickly.

To correct its numerous problems, the bank first determined what it needed and then prioritized what to fix first. During the product research phase, storage administrators studied and evaluated products from all the top brand-name vendors and a few startup companies. After much scrutiny, the bank zeroed in on the only solution it felt met its requirements. Backup My Info! Inc., a backup service provider/VAR based in Florida and New York, recommended Televaulting backup-to-disk software from Asigra Inc., a 19-year-old Toronto software firm (see "Pay-as-you-go remote backup"). To ensure data would never be outside its control, the bank elected to license Televaulting from Backup My Info! instead of purchasing the backup/restore service.

Asigra's Televaulting has only two main backup/restore components: DS-System and DS-Client. The DS-System software is the centralized repository for backups. In the bank's deployment, the DS-System contains the entire set of compressed and encrypted generations of incremental-forever backups from each remote location or laptop. All of the backed-up data is stored as content-addressable storage, which means the compressed and encrypted files and data can be restored with a high degree of granularity: individual files, complete volumes, database tables, complete databases or even bare metal can be restored from any backup generation. DS-System software runs on standard (Linux, Windows, Solaris or VMware) servers without any special hardware.
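
To illustrate the general idea behind content-addressable backup storage, the following sketch stores each chunk of data once, keyed by a hash of its contents, so that every backup generation is just a list of chunk references. It's a minimal illustration of the concept, not Asigra's implementation; the chunk size, hash algorithm and class names are assumptions made for the example.

    import hashlib

    class ContentAddressableStore:
        """Toy content-addressable repository: each unique chunk is stored once,
        keyed by the hash of its contents; a backup 'generation' is simply the
        ordered list of chunk keys needed to rebuild the data."""

        def __init__(self, chunk_size=4096):        # chunk size is an assumption
            self.chunk_size = chunk_size
            self.chunks = {}                        # hash -> chunk bytes
            self.generations = []                   # list of (label, [hashes])

        def backup(self, label, data):
            keys = []
            for i in range(0, len(data), self.chunk_size):
                chunk = data[i:i + self.chunk_size]
                key = hashlib.sha256(chunk).hexdigest()
                self.chunks.setdefault(key, chunk)  # unchanged chunks add nothing new
                keys.append(key)
            self.generations.append((label, keys))

        def restore(self, generation_index):
            label, keys = self.generations[generation_index]
            return label, b"".join(self.chunks[k] for k in keys)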

The DS-Client software is installed at remote sites. This part of the application collects the data to be backed up at the remote location from all of the target application, file, mail and database servers, as well as any included desktop and laptop PCs. The DS-Client maintains only the latest version of each backup; restores of that generation can be made locally without having to access the DS-System.

Multiple DS-Clients can transmit to the same central DS-System. DS-Client backup targets don't have agents; DS-Client uses standard APIs and existing security credentials to remotely log into all backup targets to capture relevant application data and securely manage the transfer to the DS-System.

The DS-Client maintains all current sets of permissions and doesn't require turning the backup targets into shared or mapped drives. It transmits all the data from first-time backups in a compressed and encrypted format to the central DS-System over a TCP/IP connection, applying AES or DES encryption to backup data both in flight and at rest in the DS-System repository. All subsequent backups eliminate redundant (or common) files and capture changes incrementally or at the changed-block level (delta blocking). The net effect is that the bandwidth required at each remote site is measurably reduced, which is an important cost consideration for any distributed backup/restore program.
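
The delta-blocking step can be sketched in a few lines: each block of the current data is hashed and compared against the previous generation, and only blocks that changed are compressed for transfer. The block size is an assumption, and the encryption Televaulting applies is omitted here for brevity; this is an illustration of the technique, not the product's code.

    import hashlib
    import zlib

    BLOCK_SIZE = 4096  # assumed block size for illustration

    def block_hashes(data):
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def delta_backup(previous, current):
        """Return only the blocks of `current` that changed since `previous`,
        compressed for the trip across the WAN."""
        old = block_hashes(previous)
        changed = {}
        for i in range(0, len(current), BLOCK_SIZE):
            idx = i // BLOCK_SIZE
            block = current[i:i + BLOCK_SIZE]
            if idx >= len(old) or hashlib.sha256(block).hexdigest() != old[idx]:
                changed[idx] = zlib.compress(block)  # only deltas cross the WAN
        return changed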

Cost was another key factor for the bank. DS-Client licenses are free, while the DS-System is licensed on a "pay-as-you-grow" basis, determined by the compressed backup capacity stored at the central location and any additional advanced features. Essentially, the software is licensed the same way disk is purchased. The bank paid $90,000 MSRP, with ongoing OpEx of $27,000.

Of course, there's no such thing as a perfect implementation. Shortly after installing DS-Client, the bank discovered there wasn't enough memory in the system to collect the data. Doubling the RAM solved the problem.

The bank has deployed Televaulting on the majority of its servers and most of its desktops. It plans to roll it out to its remaining desktops and laptops. The results to date:

  • Backup windows are no longer missed.
  • Bandwidth requirements and WAN costs have declined by as much as 80%.
  • Restores of individual files and SQL databases are completed in minutes.
  • IT resources have been freed up.
  • The reduction in DR costs has already paid for the solution.
  • DR compliance is no longer a worry.

An e-ICP's DR problem

Our second company is Broadview Networks Inc., a New York City-based electronically integrated communications provider (e-ICP). Broadview provides integrated communications solutions (including voice services, data services, dial-up and high-speed Internet services) to businesses in the northeastern and mid-Atlantic states.

It was using EMC Corp.'s Symmetrix Remote Data Facility (SRDF)/Adaptive Copy over the EMC Gigabit Ethernet Director for DR replication between its two primary data centers. These data centers are separated by approximately 10 milliseconds of roundtrip latency. The general rule of thumb in converting circuit latency into distance is that a millisecond of latency equals approximately 100 miles.
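
Applied to Broadview's circuit, the rule of thumb gives a rough sense of the distance involved; interpreting the rule as applying to one-way latency is our assumption.

    # Rough distance estimate from the rule of thumb above (1 ms ~ 100 miles).
    MILES_PER_MS = 100

    def separation_miles(roundtrip_ms):
        one_way_ms = roundtrip_ms / 2   # assumes the rule refers to one-way latency
        return one_way_ms * MILES_PER_MS

    print(separation_miles(10))         # roughly 500 miles between the two data centers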

EMC and the e-ICP calculated they could meet the DR replication requirements with a 24 megabit per second (Mbps) fractional DS3 permanent virtual circuit (PVC).

Unfortunately, they were wrong. Actual measurements showed a best-case effective data throughput of a miserly 17 Mbps (and as low as 12 Mbps). To make matters worse, replication requirements were increasing; Broadview wasn't meeting its backup windows and data wasn't being protected. The company determined it now needed effective data throughput of at least 28 Mbps to meet its backup windows.

Broadview would have to increase the bandwidth allocated to its EMC DR application to at least a whole DS3 and possibly part of another. Even then, there were no guarantees the additional bandwidth would fix the effective data throughput problem, and the projected additional bandwidth operating costs were high. Broadview even considered replacing the entire EMC DR solution, but decided against it when it realized the throughput problem stemmed from TCP/IP's behavior over the WAN rather than from the replication software itself.

EMC's solution was to bring in the HyperIP TCP storage replication accelerator from Network Executive Software Inc. (NetEx), Maple Grove, MN. The HyperIP software runs on a standard Linux/Intel (Lintel) appliance provided by NetEx. HyperIP is usually deployed in matched pairs (although it can be deployed in a many-to-one configuration) and, for critical DR, in an active-active, fully redundant, highly available configuration. It can be set up as a simple TCP gateway or proxy; Broadview set it up as a gateway.

HyperIP takes in TCP/IP packets from the application over a Gigabit Ethernet adapter and converts them to an efficient, alternative transport delivery mechanism between appliances. In doing so, it receives the optimized buffers from the local application and delivers them to the destination appliance for subsequent delivery to the remote application process. HyperIP is licensed on a "pay-as-you-grow" basis, according to the amount of throttled bandwidth.

HyperIP tracks data acknowledgements and resends buffers as needed; its flow-control mechanism on each connection optimizes the performance of the connection to match available bandwidth and network capacity. Because it uses a more efficient transport protocol than TCP/IP, it dramatically lowers overhead. In addition, it dynamically adjusts window size from 2 KB to 256 KB, allowing optimal replication performance. The result is essentially zero TCP latency and considerable congestion avoidance. The entire HyperIP transport is completely transparent to the storage replication application.

A key challenge for storage replication applications running over TCP/IP is packet loss. Bit errors, jitter, router buffer overflows and the occasional misbehaving node can all cause packet loss, which is devastating to effective data throughput. Most networks have some packet loss, ranging from 0.01% to as high as 5%. Packet loss causes the TCP transport to retransmit packets, slow down the transmission of packets from a given source and re-enter slow-start mode each time a packet is lost. This error-recovery process causes effective throughput to drop to as low as 10% of the available bandwidth.
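
The sensitivity of TCP throughput to packet loss can be approximated with the widely used Mathis formula, throughput ≈ MSS / (RTT × √loss). The values below are illustrative assumptions (a 1,460-byte MSS and the 10 ms roundtrip latency cited earlier), not measurements from Broadview's network.

    import math

    def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
        """Mathis approximation: throughput is roughly MSS / (RTT * sqrt(loss))."""
        rtt_s = rtt_ms / 1000.0
        bytes_per_sec = mss_bytes / (rtt_s * math.sqrt(loss_rate))
        return bytes_per_sec * 8 / 1_000_000

    # Illustrative per-flow throughput ceiling across the article's packet-loss range.
    for loss in (0.0001, 0.01, 0.05):
        print(f"{loss:.2%} loss -> ~{tcp_throughput_mbps(1460, 10, loss):.0f} Mbps per flow")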

HyperIP mitigates the effects of up to 5% packet loss by optimizing the blocks of data traversing the WAN, maintaining selective acknowledgements of the data buffers and resending only the buffers that didn't make it, not the whole frame. Packet loss for Broadview -- although nominally in the 0.01% range -- was having a negative impact on the EMC SRDF effective data throughput.

There were multiple issues with Broadview's implementation. First, because latency fluctuated constantly, the HyperIP units had difficulty functioning correctly. Once the network settled down, an ATM router port failed. After that was corrected, the Symmetrix began having intermittent Gigabit Ethernet port time-out issues because its firmware wasn't up to date. With the firmware updated, the ATM router port repaired and the network stabilized, things ran smoothly. The implementation cost was $120,000 MSRP; the ongoing OpEx is approximately $18,000.

Broadview is thrilled with its HyperIP implementation. Its EMC SRDF/Adaptive Copy effective data throughput now ranges between 60 Mbps and 90 Mbps on its 24 Mbps PVC, averaging about 70 Mbps. The plan is to aggregate other storage replication applications (such as Veritas Volume Replicator) through the HyperIP to take advantage of the additional "free" bandwidth.

Virtualizing storage arrays

For a large southern U.S. manufacturer, DR recently became a primary issue because of regulatory compliance. Before the regulations, the manufacturer thought of DR protection as nothing more than backing up to tape and shipping the tape offsite. But before it could implement a better DR system, it had to solve a difficult problem between its VMware servers and IBM Corp. ESS (Shark) storage systems.

To get around the Shark's inability to dynamically allocate storage to a logical unit number (LUN), the manufacturer carved its Sharks into 4 GB LUNs. It then used the server's volume manager to aggregate LUNs as needed for applications. The workaround was successful until it was moved to a VMware-based environment.

The problem arose when the storage requirements on the VMware servers increased to 2 TB. VMware is limited to 128 LUNs, which is more than sufficient in most circumstances. But the Shark workaround was now a roadblock: 128 LUNs multiplied by 4 GB per LUN equals a maximum of 512 GB -- barely one-quarter of the new requirement. The manufacturer could have carved the Shark up again, but that would have meant migrating all the data from the Shark, re-initializing and reformatting the Shark and then migrating the data back -- a process that would have been disruptive and time consuming. Another option would have been to rip out the VMware environment and go back to one server image per platform, but that would have incurred dramatic disruptions and high costs.
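
The arithmetic behind the roadblock is simple enough to check in a couple of lines (the constants below come straight from the figures above).

    MAX_LUNS = 128            # VMware LUN limit cited above
    LUN_SIZE_GB = 4           # size of the carved Shark LUNs
    REQUIRED_GB = 2 * 1024    # the new 2 TB requirement

    max_capacity_gb = MAX_LUNS * LUN_SIZE_GB
    print(max_capacity_gb)                    # 512 GB ceiling
    print(REQUIRED_GB / max_capacity_gb)      # 4.0 -- the requirement is four times the ceiling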

The company thought SAN fabric-based virtualization might solve its primary problem and, as a side benefit, provide a cost-effective DR solution as well. It evaluated solutions from DataCore Software Corp., FalconStor Software Inc., IBM and Troika Networks Inc. It settled on Troika's Accelera and SAN Volume Suite (SVS), which includes Troika's VMware multipathing fabric agent, StoreAge Storage Virtualization Manager (SVM), multiMirror, multiCopy and remote mirroring over TCP/IP. The SVS is primarily deployed in active-active pairs, with each unit connected to its own Fibre Channel (FC) switch or director.

The VMware multipathing fabric agent, part of SVS, resides on the Troika Accelera, not the VMware server. The StoreAge volume management, replication, snapshot, mirroring and data migration tools reside on the SVM appliance. The SVM appliance provides the virtualized LUN map to the fabric agent, which directs each VMware server's access to the physical LUNs. Replication over distance is done using TCP/IP from the SVM appliance.
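
The virtualized LUN map concept can be sketched as a simple translation table: a large virtual LUN presented to a VMware host is backed by many of the small physical Shark LUNs, and the fabric agent redirects each I/O to the right physical extent. The structure and names below are illustrative assumptions, not StoreAge's or Troika's actual data model.

    GB = 1024 ** 3
    PHYS_LUN_GB = 4            # the 4 GB physical LUNs carved from the Shark

    def build_virtual_lun(name, size_gb, first_phys_lun=0):
        """Describe a virtual LUN as an ordered list of physical-LUN extents."""
        extents = [{"phys_lun": first_phys_lun + i,
                    "offset": i * PHYS_LUN_GB * GB,
                    "length": PHYS_LUN_GB * GB}
                   for i in range(size_gb // PHYS_LUN_GB)]
        return {"name": name, "size_gb": size_gb, "extents": extents}

    def resolve(vlun, byte_offset):
        """Translate a virtual byte offset into (physical LUN, offset within it)."""
        extent = vlun["extents"][byte_offset // (PHYS_LUN_GB * GB)]
        return extent["phys_lun"], byte_offset - extent["offset"]

    vlun = build_virtual_lun("vmware_datastore_01", 1024)  # a 1 TB virtual LUN
    print(resolve(vlun, 5 * GB))                           # -> (1, 1 GiB into physical LUN 1)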

The company experienced only one significant setback, which occurred when it was implementing the high-availability (HA) option. The administrator pulled a series of FC cables to prompt a failover. Apparently, the sequence of cable pulls revealed a bug in the failover code, which Troika then patched.

Another problem was attributed to operator error. The company had implemented StoreAge server agents on a number of server platforms, including Windows and Novell NetWare, before they were available as "fabric" agents. When new Novell NetWare servers were added, the agents weren't loaded on the servers. This let the Novell servers connect directly to physical LUNs they shouldn't have had access to (instead of the virtual LUNs), and data was corrupted. The error was quickly discovered and fixed, although it took much longer to correct the corrupted data. As a result, the company is looking to migrate all of its server agents to fabric-based agents to prevent similar errors. Implementation costs were approximately $120,000 MSRP for the HA configuration; ongoing OpEx is approximately $18,000.

The Troika SVS system solves the VMware/Shark dilemma by presenting 1 TB virtual LUNs to the VMware servers. The company expects payback in as little as 12 to 18 months based on the savings vs. current costs for storage provisioning and offsite tape storage. That estimate doesn't include the savings from disk-based DR: the Troika SVS will reduce disk-based replication costs from approximately $90,000 per terabyte to $10,000 per terabyte by allowing the replicated data to reside on a lower cost disk system such as IBM's DS4000, Nexsan Technologies' ATAboy or EMC's Clariion.


About the author: Marc Staimer is the president of Dragon Slayer Consulting.

