Three approaches to remote replication

Remote replication came to the forefront in the wake of Hurricane Katrina in 2005, when storage managers realized how important it was to send vulnerable data offsite.

It's possible to recover from hardware failures, data corruption and even malicious activity by copying data to another disk. But what about natural disasters or even terrorist attacks?

Remote data replication answers that question by duplicating data between remote servers or storage platforms across a wide area network (WAN).

Remote replication has been around for years, but advances in technology have made it mainstream. "Today I can buy replication for just about any application that I may have in my environment, and I don't have to rob a bank to do that," says Arun Taneja, consulting analyst and founder of the Taneja Group in Hopkinton, Mass. This article will take a look at the elements of remote replication, its pitfalls and its impact on the storage organization.


Making remote replication work

There are three approaches to remote data replication: host-based, array-based and fabric-based. A host-based architecture uses software running on a server or dedicated appliance to pass data across a WAN to a target system. One example is the KBX5000 Data Protection Platform from Kashya Inc. (now part of EMC Corp.), which connects directly into the storage area network (SAN). Host-based replication is usually the least expensive of the three approaches, but it doesn't always match the performance of the other two.

Remote replication can also be accomplished between compatible storage arrays using replication software that accompanies the arrays themselves. This array-based approach used to be inflexible because replication required identical arrays at each end (e.g., Symmetrix to Symmetrix), but this is changing as replication software supports greater heterogeneity. For example, EMC's Clariion AX150 array ships with EMC SAN Copy software that can create remote point-in-time data copies between Clariion, Symmetrix, IBM, Sun and Hitachi Data Systems storage arrays.

Recently, remote data replication has started appearing in the network fabric, usually as software running on switches installed in the SAN. Topio Inc.'s Data Protection Suite is an example of fabric-based remote replication; Kashya and FalconStor Software Inc. offer similar products. Replication in the fabric is particularly appealing because switches support a broad range of devices and there is no significant performance impact. "Obviously, intelligent switches can be brought to bear," Taneja says. "But many of these examples didn't even use intelligent switches yet."

The growth of remote replication is attributed to the convergence of lower disk costs, lower bandwidth costs and the emergence of bandwidth optimization techniques. Data deduplication is one element of bandwidth optimization, along with compression, delta differential and improved data flow methods. "It's also about optimizing the data flow -- creating larger packets so that when data does get transmitted, it's transmitted more efficiently," says Greg Schulz, founder and senior analyst at the StorageIO Group in Stillwater, Minn.
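To make the optimization concrete, here is a minimal Python sketch of block-level deduplication plus compression. It is an illustration of the technique only, not any vendor's implementation; the fixed block size and the in-memory hash set are simplifying assumptions (shipping products typically use variable-size chunking and persistent indexes).

import hashlib
import zlib

BLOCK_SIZE = 64 * 1024  # simplifying assumption: fixed-size blocks

def optimize_for_wan(data: bytes, sent_hashes: set) -> list:
    """Deduplicate and compress a data stream before replication.
    Blocks that have already been sent are replaced by short hash
    references instead of being retransmitted."""
    outgoing = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest in sent_hashes:
            outgoing.append(("ref", digest))  # duplicate: send reference only
        else:
            sent_hashes.add(digest)
            outgoing.append(("data", digest, zlib.compress(block)))
    return outgoing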

Synchronous and asynchronous remote data replication

Remote data replication can be accomplished synchronously or asynchronously. Synchronous data replication occurs in real time: data is written from the source disk all the way to the destination disk before the transfer is acknowledged, i.e., the remote disk must "catch up" to the local disk before the application can proceed. This guarantees synchronization, but latency in the acknowledgement limits synchronous replication to relatively short distances, such as within a building, campus or metropolitan area. WAN interruptions can also wreak havoc with synchronous replication schemes.

With asynchronous data replication, a write is acknowledged as soon as it reaches the local disk; the data is then passed across the WAN to the remote disk as time and bandwidth allow. In many cases, the replicated disk content will lag behind the local data by as much as several hours. However, asynchronous replication works well over long distances (because acknowledgement latency isn't a factor) and across inexpensive, low-bandwidth WAN links. It is also tolerant of WAN disruptions, maintaining a local copy of pending data until WAN service is restored.
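The acknowledgement difference between the two modes can be sketched in a few lines of Python. This is a toy model under loud assumptions (plain dictionaries stand in for the local and remote disks, and an in-memory queue stands in for the WAN buffer), not the design of any shipping product.

import queue
import threading

class Replicator:
    """Toy illustration of synchronous vs. asynchronous replication."""

    def __init__(self, local_disk: dict, remote_disk: dict, synchronous: bool):
        self.local = local_disk
        self.remote = remote_disk
        self.synchronous = synchronous
        self.pending = queue.Queue()  # holds data during WAN outages in the async case
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block_id, data) -> str:
        self.local[block_id] = data
        if self.synchronous:
            # Ack only after the remote disk has the data; WAN latency is
            # added to every write, which is why distance is limited.
            self.remote[block_id] = data
            return "ack"
        # Asynchronous: ack immediately, replicate in the background.
        self.pending.put((block_id, data))
        return "ack"

    def _drain(self):
        while True:
            block_id, data = self.pending.get()
            self.remote[block_id] = data  # remote copy may lag behind

With synchronous=True, every write pays the full round trip to the remote copy before it is acknowledged; with synchronous=False, the write returns immediately and the background thread drains the queue, which is exactly where the lag described above accumulates.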

Synchronous and asynchronous techniques can be employed together. For example, synchronous replication may serve a role in local backups, while asynchronous replication may duplicate the data to a distant disaster site. Still, remote replication should never be the sole means of data protection. "Both technologies can replicate viruses or other forms of corruption," says Heidi Biggar, analyst at the Enterprise Strategy Group in Milford, Mass. "Because of this, end users may want to couple replication with some type of snapshot -- continuous data protection (CDP) or near-CDP -- technology."

Replication management and tools

Tools for remote data replication are gaining features to work with other network management tools. The trend is clear: Simplify tools and automate management processes by improving integration with other backup and recovery products. Look for a gradual convergence of storage, network and application tasks. Analysts expect that users will eventually manage remote replication products through a single management interface that also embraces disk-to-disk (D2D), CDP and other technologies. "CommVault [Systems Inc.], Symantec [Corp.] and EMC have also been very vocal lately about their plans to provide integrated, recovery-focused platforms," Biggar says.

Roadblocks to remote replication

Although remote replication is more affordable and robust than ever before, serious deployment issues remain. Cost is the first concern. While the hardware and bandwidth are considerably less expensive than in years past, organizations must still carry the cost of a second site along with the recurring cost of bandwidth. And with data volumes constantly spiraling upward, there is always more data to move in less time, even with the benefit of bandwidth optimization. "Any gains that are made are zapped right away by consumption," Schulz says.

Many enterprise applications, such as databases and customer relationship management (CRM) software, frequently operate across multiple volumes. As a consequence, administrators must replicate each related volume to ensure a complete environment for the application. Not only does this demand more storage, taking more time and bandwidth for a complete replication, it also requires administrators to track the replication consistency across all of those volumes. Replication tools like Kashya's KBX5000 are designed to help ensure application data consistency.
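To illustrate the consistency problem, here is a hypothetical Python sketch of a consistency-group snapshot: all of an application's volumes are captured in one pause window so the remote copy never mixes old and new data across volumes. The quiesce() and resume() callbacks are invented placeholders for whatever mechanism a real application or array provides; nothing here reflects how the KBX5000 or any other product actually works.

import copy

def snapshot_consistency_group(volumes: dict, quiesce, resume) -> dict:
    """Capture a point-in-time image of every related volume together.
    'volumes' maps names (e.g., 'db_data', 'db_log') to block contents;
    quiesce() and resume() are hypothetical hooks that pause and restart
    application writes so no volume changes mid-snapshot."""
    quiesce()
    try:
        # All volumes are copied inside one pause window, so the remote
        # site never sees db_data that is newer than db_log.
        return {name: copy.deepcopy(blocks) for name, blocks in volumes.items()}
    finally:
        resume()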

Since so many products include remote replication software today, administrators must often juggle multiple replication platforms as more storage platforms are brought online -- a management nightmare for even the most skilled IT staff. Analysts note that fabric-based replication products can eliminate the growing glut of replication applets.

Multiple management applets can also have an adverse impact on remote replication performance, and simultaneous replication can choke the available WAN bandwidth. This invariably causes all of the replication tasks to fall behind and can be a serious problem for mission-critical replication tasks. Select remote replication products that can throttle their bandwidth utilization based on task priority. For example, the main corporate database may take priority and use most of the available bandwidth, while other data replication may only use a small part of the bandwidth, until the high-priority task is finished.
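As a rough illustration, priority-based throttling can be modeled as weighted shares of the link. The task names, weights and link speed below are invented for the example; real products enforce the allocation at the packet or I/O level and rebalance as tasks complete.

def allocate_bandwidth(tasks: list, total_kbps: int) -> dict:
    """Divide available WAN bandwidth among replication tasks in
    proportion to their priority weights. tasks: [(name, weight), ...]"""
    total_weight = sum(weight for _, weight in tasks)
    return {name: total_kbps * weight // total_weight for name, weight in tasks}

# Example: the corporate database gets most of the link until it finishes.
shares = allocate_bandwidth([("corp_db", 8), ("file_server", 1), ("mail", 1)], 10_000)
# -> {'corp_db': 8000, 'file_server': 1000, 'mail': 1000}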

Finally, analysts highlight the importance of bandwidth optimization and encourage the selection of remote replication products that include strong features, such as delta differential or data deduplication. For example, it is far more efficient to transfer only the changed bytes or blocks of a file than to retransmit the entire file. "Not every replication product in the industry has that level of sophistication right now and the IT users need to be looking for that massive efficiency difference," Taneja says.
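A minimal sketch of the delta-differential idea, assuming the source side keeps a hash of each block as it was last replicated: only blocks whose hashes no longer match are sent across the WAN. The 4 KB block size is an arbitrary choice for the example.

import hashlib

BLOCK = 4096  # simplifying assumption: fixed 4 KB blocks

def delta_blocks(data: bytes, last_sent: dict):
    """Yield only the blocks that changed since the last replication pass.
    'last_sent' maps a block's offset to the hash recorded when that block
    was last shipped; storing hashes rather than full copies keeps the
    source-side overhead small."""
    for offset in range(0, len(data), BLOCK):
        block = data[offset:offset + BLOCK]
        digest = hashlib.sha1(block).digest()
        if last_sent.get(offset) != digest:
            last_sent[offset] = digest  # record what is about to be shipped
            yield offset, block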


This was first published in September 2006
