Data Domain, along with other companies such as ExaGrid Systems Inc. and Quantum Corp., is riding the wave of rapid adoption of data deduplication systems as the solution to unmanageable data growth. However, questions have been raised about how these solutions will affect recovery speed.
Unlike on traditional tape, deduplicated data isn't stored sequentially; in fact, it can be spread out all over the disk. That raises the question: Are you going to see longer recovery windows as a result of using a data deduplication system?
The answer, unfortunately, is that it depends. If you're using two-year-old tape technology for your backup and recovery efforts today, then it's safe to say that in almost every circumstance you'll see an improvement in recovery performance. Even if your tape technology is fairly modern, you should see improvements, especially when restoring individual files and folders.
Even if you have a device that can deliver data quickly in recovery mode (this would have to be high-speed disk with a well-tuned virtual tape library), your storage and network infrastructure must be able to deliver that data to its intended target at those speeds. This is usually not the case: there's almost always a performance bottleneck that must be overcome or lived with.
It is likely that the bottleneck is going to occur on the network. The deduplicated disk will outperform a Gigabit Ethernet network segment. Even with a recovery to locally attached disk, the write performance of that target disk is a major bottleneck. Disk writes are slower than disk reads, and those writes are going to be coupled with RAID parity recalculation.
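To make the bottleneck concrete, here is a minimal back-of-the-envelope sketch. The throughput figures and the 2 TB restore size are illustrative assumptions, not measurements from any vendor's system; plug in your own numbers.

```python
# Rough recovery-time estimates for the bottlenecks discussed above.
# All throughput figures are illustrative assumptions, not vendor specs.

def recovery_hours(data_gb: float, throughput_mbps: float) -> float:
    """Hours to move data_gb gigabytes at a sustained throughput_mbps MB/s."""
    return (data_gb * 1024) / throughput_mbps / 3600

DATA_GB = 2000  # hypothetical 2 TB restore

# Assumed effective rates (MB/s): the dedupe disk's read speed, a Gigabit
# Ethernet segment, and a local RAID target absorbing writes plus parity
# recalculation.
rates = {
    "dedupe disk read": 400,
    "Gigabit Ethernet": 110,   # ~125 MB/s theoretical, less in practice
    "RAID write target": 150,
}

for name, mbps in rates.items():
    print(f"{name:>18}: {recovery_hours(DATA_GB, mbps):.1f} h")
```

Even with these rough numbers, the disk read rate isn't the limit: the network segment or the write-side RAID target stretches the same restore from under two hours to several.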
Slow or inefficient clean-up routines can also slow recovery time. These routines limit fragmentation, clean up orphaned data segments and perform data integrity checks, all of which consumes processing power and disk I/O and can slow down the recovery process. Inefficiencies here usually come down to poorly written routines, and the symptoms are clean-up runs so slow that they don't finish before the next inbound backup job begins, or that are still running when a recovery attempt is made.
Unlike a tape drive that sits idle until the next backup job, these clean-up routines are almost always working. The important consideration is that they are typically active outside the backup window, often the middle of the business day, which is also when most recovery requests are made.
Speeding up data recovery performance
Recovery performance can also be greatly impacted when the system has to handle concurrent operations, even outside of the clean-up process, for example, if you're trying to recover data while a backup is still going on. If tape is going to stay in your data protection plan, then one of the most important scenarios to test is recovery performance from the data deduplication system while you're doing a transfer to tape.
It is also important to test your recovery performance while the data deduplication system is in a degraded state. For example, a drive failure is simple to simulate -- pull a drive and let the rebuild process begin. While that's going on, start a large recovery and see how it affects performance.
Not all systems suffer under these conditions, and it is possible to get recovery performance that equals, or comes close to, backup performance. As is always the case, you need to establish your recovery goals first. Then make sure that the infrastructure and the applications are tuned to handle that data transfer. This may involve upgrading or fine-tuning your network, or changing the way your data is laid out on your storage device.
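One way to start with a goal is to turn it into the sustained throughput the weakest link must deliver. This sketch uses a hypothetical goal of restoring 5 TB within a four-hour window; both numbers are assumptions for illustration.

```python
# Translate a recovery-time goal into the sustained throughput the
# weakest link must deliver -- a quick sanity check before tuning.
# The 5 TB size and 4-hour window are illustrative assumptions.

def required_mbps(data_gb: float, rto_hours: float) -> float:
    """Sustained MB/s needed to restore data_gb within rto_hours."""
    return (data_gb * 1024) / (rto_hours * 3600)

needed = required_mbps(5000, 4)
print(f"Required sustained throughput: {needed:.0f} MB/s")
# If a Gigabit Ethernet segment tops out around 110 MB/s in practice,
# a ~356 MB/s requirement means the goal can't be met without a faster
# network or a different data layout.
```

If the required rate exceeds what any single component in the path can sustain, the goal itself tells you where the upgrade or tuning effort has to go.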
About the author: George Crump, founder of Storage Switzerland, is an independent storage analyst with over 25 years of experience in the storage industry.