What you will learn from this tip: How continuous data protection works (and doesn't work). Plus: A few products to check out, if you're in the market for CDP. (This tip is part of our Storage 101 tip series.)
Continuous data protection (CDP) software is a relatively new genre of backup technology, and as with any new technology, questions about its merits and pitfalls abound. Drawing on a recent webcast that storage analyst Jerome Wendt gave on CDP, here is what is puzzling storage managers about the technology.
Does CDP handle system volumes?
Jerome Wendt: This varies by CDP product. As a rule of thumb, network-based products are volume- and data-agnostic, so it is safe to assume that products such as Revivio Continuous will handle system volumes as long as they are part of a mirrored volume set in which the Revivio-presented LUN participates. Host-based products need to be evaluated on a case-by-case basis: both Mendocino Software's Realtime and Storactive's LiveBackup, for example, are host-based CDP products that support system volumes on Windows but not on Unix.
When is the I/O complete status passed to write?
Wendt: This depends on whether the approach is in-band or out-of-band, and whether the writes occur synchronously or asynchronously. For host-based products, the write acknowledgement occurs when the write hits the secondary storage (the CDP management server). For network-based products, a write acknowledgement is returned from both the primary storage and the CDP appliance. The question of I/O complete status matters for performance, for synchronous mirrors and wherever data loss is a concern. On the performance front, network-based CDP appliances tend to be as fast as primary storage, since most cache writes before committing them to disk. Host-based CDP solutions such as Storactive's and Mendocino's run in asynchronous mode, so the issue is performance impact, not data loss. As for data loss, even in the event of a catastrophic failure of either the network-based appliance or the management appliance with which the host-based CDP agents communicate, there is no data loss on the primary storage.
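The acknowledgement timing Wendt describes can be pictured with a toy model. The sketch below is purely illustrative (the class name, fields and queue are hypothetical, not any vendor's implementation): in synchronous mode the acknowledgement waits for both the primary volume and the CDP journal, while in asynchronous mode the journal copy is merely queued and the acknowledgement returns as soon as primary storage completes.

```python
import queue

class CdpWriteSplitter:
    """Toy model of CDP write interception (illustrative only)."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.primary = {}            # block -> data; stands in for primary storage
        self.journal = []            # append-only CDP journal (secondary copy)
        self._queue = queue.Queue()  # async send queue to the CDP server

    def write(self, block: int, data: bytes) -> str:
        self.primary[block] = data                  # write hits primary first
        if self.synchronous:
            self.journal.append((block, data))      # journal write before ack
            return "ack-after-both"
        self._queue.put((block, data))              # queued; flushed in background
        return "ack-after-primary"

    def drain(self):
        """Background flush of the async send queue into the journal."""
        while not self._queue.empty():
            self.journal.append(self._queue.get())
```

In the asynchronous case the journal lags the primary until `drain()` runs, which is exactly why Wendt frames the host-based trade-off as one of performance rather than primary-side data loss.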
Does CDP allow multiple updates to blocks in memory before media writes?
Wendt: Network-based providers such as Revivio have developed specific algorithms to reduce physical I/O for exactly this purpose. Host-based providers do as well, although buffering these writes delays the writes themselves.
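The idea of collapsing repeated updates to the same block before the media write can be sketched as follows. This is a deliberately simplistic, hypothetical buffer, not any vendor's algorithm; real CDP products must also retain the intermediate versions in their journal, which this sketch omits.

```python
class CoalescingBuffer:
    """Illustrative write coalescing: repeated in-memory updates to the
    same block are collapsed so only the latest version reaches media."""

    def __init__(self):
        self._pending = {}      # block -> latest data awaiting a media write
        self.media_writes = 0   # count of physical writes actually issued

    def update(self, block: int, data: bytes):
        self._pending[block] = data   # a later update replaces an earlier one

    def flush(self, media: dict):
        """Issue one physical write per dirty block, then clear the buffer."""
        for block, data in self._pending.items():
            media[block] = data
            self.media_writes += 1
        self._pending.clear()
```

Three updates to one block plus one update to another cost only two physical writes at flush time, which is the I/O reduction Wendt alludes to; the cost is that the writes sit in memory until the flush.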
Does CDP support contingency (affinity) groups?
Wendt: Most second-generation network- or host-based CDP products do support them, although not necessarily under those names. Mendocino refers to them as "contexts" and defines them as volume sets across which write-order fidelity is maintained. Revivio calls them affinity groups and uses the term to describe a set of LUNs that are all instantly recreated with data from exactly the same point in time.
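Write-order fidelity across a volume set boils down to stamping every write in the group with a single, group-wide ordering, so that all volumes can later be rolled to exactly the same point. The sketch below is a hypothetical structure for illustration, not a description of Mendocino's contexts or Revivio's affinity groups internally.

```python
import itertools

class ConsistencyGroup:
    """Sketch of write-order fidelity across a set of volumes: one
    monotonically increasing sequence number is shared by every write
    in the group, so a consistent cross-volume image can be cut at any
    sequence point."""

    def __init__(self, volumes):
        self._seq = itertools.count()                 # group-wide write ordering
        self.journal = {v: [] for v in volumes}       # per-volume journals

    def write(self, volume: str, block: int, data: bytes) -> int:
        stamp = next(self._seq)
        self.journal[volume].append((stamp, block, data))
        return stamp

    def image_at(self, point: int) -> dict:
        """Latest version of each block on every volume as of `point`."""
        return {
            v: {blk: d for s, blk, d in entries if s <= point}
            for v, entries in self.journal.items()
        }
```

Cutting the image at any sequence number yields every volume's state from exactly the same moment, which is what Revivio means by recreating a set of LUNs from a single point in time.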
In host-based CDP, for data to be replicated off site, does the CDP system need to be taken off line or can it continue to function and be replicated at the same time?
Wendt: In nearly every case, the CDP product can continue to function and replicate at the same time. Mendocino's software performs data collection for the protected server while simultaneously replicating data asynchronously to a remote site over IP, using two separate, independent processes within the management server to perform these tasks.
How does CDP manage flow control for peak load?
Wendt: Host-based products typically let the CDP send queue back up into local storage without interruption; they currently lack any mechanism to respond to peak write I/O periods, and if the local staging resources for these I/Os are depleted, CDP stops. Revivio claims to let users provision as much CDP resource as needed for peak periods and gives administrators the flexibility to expand that capacity as peak loads increase over time. It also has quality-of-service (QoS) provisions that prioritize processing incoming writes over other tasks.
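The host-based behavior Wendt describes, queue in memory, spill overflow to local staging, halt when staging is exhausted, can be modeled in a few lines. Capacities and names here are hypothetical, purely to make the failure mode concrete.

```python
from collections import deque

class SendQueue:
    """Sketch of host-based CDP flow control: writes queue in memory,
    overflow spills to local disk staging, and collection stops once
    the staging area is depleted."""

    def __init__(self, mem_slots: int, staging_slots: int):
        self.mem = deque()
        self.staging = deque()
        self.mem_slots = mem_slots
        self.staging_slots = staging_slots
        self.stopped = False

    def enqueue(self, write) -> bool:
        if self.stopped:
            return False
        if len(self.mem) < self.mem_slots:
            self.mem.append(write)              # normal path: in-memory queue
        elif len(self.staging) < self.staging_slots:
            self.staging.append(write)          # peak load: spill to staging
        else:
            self.stopped = True                 # staging depleted: CDP halts
            return False
        return True
```

Once both tiers fill, every further write is refused, which is the hard stop Wendt warns about; Revivio's pitch amounts to letting administrators grow the equivalent of `staging_slots` ahead of peak load.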
What are "side files"? Is this just a facility for "break" of process to allow backup of files, LUNs to tape, etc and resync to continue?
Wendt: "Side files" go by different names at different CDP vendors: Revivio calls them TimeImages, while Mendocino Software refers to them simply as snapshots. Whatever the name, most vendors support them and cite the ability to create them as one of their primary value-adds. Taking a snapshot with Mendocino lets administrators present it to another server; the snapshot is then neither attached to the protected server nor accessed through the management appliance. From these snapshots, backups can be run without affecting the protected server's data in any way. This feature does not work the same way on all CDP products. On Storactive's LiveServ, for example, when this "break" occurs, it halts CDP and forces a resync on restart because of the tight coupling that exists between LiveServ and Exchange.
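Conceptually, a side file is just a read-only point-in-time view materialized from the CDP journal and handed to a second server. The sketch below is a hypothetical structure for illustration (journal entries are assumed to be `(sequence, block, data)` tuples), not Revivio's or Mendocino's actual format.

```python
class TimeImage:
    """Sketch of a CDP "side file": a read-only point-in-time image cut
    from the journal, suitable for presenting to a backup server so
    backups never touch the protected server's volume."""

    def __init__(self, journal, point: int):
        # Materialize the latest version of each block as of `point`.
        self.blocks = {}
        for stamp, block, data in journal:
            if stamp <= point:
                self.blocks[block] = data

    def read(self, block: int):
        return self.blocks.get(block)   # None for blocks unwritten at `point`
```

Because the image is built entirely from the journal, reads against it leave the protected server and the ongoing collection stream untouched, which is the decoupling Wendt credits Mendocino with and LiveServ lacks.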
How is time synched across multiple servers' access to multiple storage subsystems to assure updates are sequenced properly?
Wendt: Revivio finds that host-based CDP solutions cannot synchronize time to a granularity sufficient for a cross-server solution, and believes this can only be done with a network-based appliance like its own, which can manage time to the microsecond across all initiators and thereby protect an application set that spans multiple servers. Despite Revivio's claims, Mendocino Software, which supports the host-based Realtime CDP solution, plans to introduce this functionality sometime in 2005.
If every block is sent via IP to the central server, that could mean a bottleneck in the system, especially for heavy I/O applications. Is that correct?
Wendt: Depending on the product selected, this could be true. Host-based products such as Storactive's LiveBackup rely on the assumption that I/Os are sporadic and will not impact performance. Mendocino Software finds that when writes to the primary storage and the CDP management server occur synchronously, primary applications can be slowed, since both writes must complete before an acknowledgement is returned to the protected server. Second-generation host-based CDP products from companies such as Topio and Revivio cache the writes to local disk before transmitting them to the central server to minimize the dual-write penalty.
What is the overhead associated with the agent installation on each production host?
Wendt: Most of the vendors in this space report an average of 2% to 3% overhead. In reality, read-intensive applications will consume much less than that, while write-intensive applications will likely see greater overhead.
Does CDP support a NetWare 6.5 file server?
Wendt: I am not aware of any host-based CDP product that supports NetWare 6.5. In theory, however, second-generation network-based CDP products could be used with a SAN-attached Novell server. NetWare 6.5 offers storage services that can create a software mirror (RAID-1); by presenting a LUN from a CDP product and mirroring it with a same-size or smaller LUN presented to the Novell file server, you could achieve this. Network-based CDP products such as Alacritus Software's Chronospan and Revivio's Continuous could be used in this configuration. Revivio reports that its CPS-1200 is a block-mode device and is OS-agnostic. Neither company is aware of any clients using its technology with NetWare 6.5, although both have customers using it in conjunction with a myriad of older technologies such as MUMPS, Pick and Informix.
If my management server is backed up by Tivoli Storage Management (TSM) software, does the management server have to be fully restored before any server can be recovered, or a restore started for disaster recovery?
Wendt: In short, the answer is yes. I would recommend only configuring TSM to protect the system and application files, not the data store itself. Since the data store is generated by the CDP application, this store will change constantly and relying on TSM could actually leave you exposed in the event of a disaster. You would be better served to set up a secondary CDP server, ideally offsite, and asynchronously replicate the CDP's data store to that server. This way, you could restore the CDP server OS and application first using TSM, then recover the data from that secondary site.
Do these CDP benefits and drawbacks apply to all environments/platforms, including mainframe?
Wendt: Yes, these benefits and drawbacks apply across open-systems environments; however, I am not aware of any CDP product that supports the mainframe.
Can second generation CDPs support point in time recovery for single files?
Wendt: Yes, they can. Second-generation host-based CDP products such as Mendocino Software's Realtime can be file system-based to support this feature. First-generation host-based CDP products such as XOsoft's Enterprise Data Rewinder and Storactive's LiveBackup also support this type of functionality.
For more information:
Tech Report: CDP
Product Roundup: Backup software
About the author: Mark Lewis is the managing editor of SearchStorage.com.