It is tempting to say that there is never a good reason to disable the dedupe process. There are, however, two situations that may warrant turning data deduplication off.
The first situation involves multi-tier deduplication, which refers to deduplicating the same data multiple times using different deduplication methods to achieve the highest possible dedupe ratio.
Global deduplication products are a good example of multi-tier deduplication. Let's say you need to back up 10 servers running the same operating system. There is a lot of redundancy across these servers because they all contain the same system files. A local, block-level dedupe process eliminates the redundancy within each server's file system, but the backup media would still hold a high degree of redundant data, because that first pass only removes duplicates on a per-server basis; cross-server redundancy remains. A second deduplication pass at the backup target level eliminates the remaining redundancy.
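To make the two passes concrete, here is a minimal Python sketch of fixed-block, hash-based deduplication. Everything in it (the 4 KB block size, the simulated server images and the dedupe_blocks helper) is an illustrative assumption, not any particular product's implementation. The first pass uses a per-server hash index and so cannot see cross-server duplicates; the second pass uses one shared index at the backup target and catches them.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch

def dedupe_blocks(data: bytes, seen: set[bytes]) -> tuple[list[bytes], int]:
    """Split data into fixed-size blocks and keep only those whose
    hash is not already in the index. Returns the surviving blocks
    and the number of duplicate bytes eliminated."""
    unique, saved = [], 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            saved += len(block)  # duplicate: a reference is stored instead
        else:
            seen.add(digest)
            unique.append(block)
    return unique, saved

# Ten simulated servers: three identical "system file" blocks each,
# plus one block of server-unique data.
servers = {
    f"server{i}": (b"\x00" * BLOCK_SIZE) * 3 + bytes([i + 1]) * BLOCK_SIZE
    for i in range(10)
}

# Pass 1: per-server dedupe. Each server gets its own hash index, so
# duplicates within a server are removed, but the identical system
# block shared across servers survives once per server.
first_pass_output = []
for name, disk in servers.items():
    unique, saved = dedupe_blocks(disk, seen=set())
    first_pass_output.extend(unique)
    print(f"{name}: local pass eliminated {saved} bytes")

# Pass 2: global dedupe at the backup target. A single shared index
# now catches the cross-server redundancy the first pass missed.
global_unique, saved = dedupe_blocks(b"".join(first_pass_output), seen=set())
print(f"global pass eliminated a further {saved} bytes")
```

In this toy run, each local pass eliminates the two duplicate system blocks inside its own server, and the global pass then eliminates the nine remaining cross-server copies of that shared block.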
This type of multistep deduplication is perfectly acceptable. However, data deduplication has become so commonplace that multiple products may end up deduplicating the same data using an identical dedupe process. Performing a local, block-level deduplication on a per-server basis at the file system level and then again at the storage level adds overhead without improving the dedupe ratio. At worst, it could result in data corruption.
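A small self-contained sketch (using the same assumed block scheme as above) shows why the identical repeat pass is wasted work: re-hashing data that has already been deduplicated by the same method eliminates nothing, so every cycle it burns is pure overhead.

```python
import hashlib

BLOCK_SIZE = 4096
# A toy store that has already been block-level deduplicated:
# every block in it is, by construction, unique.
blocks = [bytes([i]) * BLOCK_SIZE for i in range(100)]

# Running the identical hash-based pass over it again finds nothing.
seen: set[bytes] = set()
extra_saved = 0
for block in blocks:
    digest = hashlib.sha256(block).digest()
    if digest in seen:
        extra_saved += len(block)
    seen.add(digest)

print(f"identical repeat pass eliminated {extra_saved} bytes")  # prints 0
```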
The other situation involves overhead: the extra work the dedupe process performs can degrade system performance. If deduplication drags system performance below an acceptable level, it is time to disable it.
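What counts as unacceptable is workload-specific, but the measurement can be sketched as a microbenchmark: time the hash-and-lookup work an inline dedupe pass adds to each write, then compare it against a latency budget. The timed_inline_dedupe helper, the 4 KB writes and the 5 ms budget below are hypothetical values chosen for illustration, not any product's real instrumentation.

```python
import hashlib
import time

LATENCY_BUDGET_MS = 5.0  # hypothetical per-write latency budget

def timed_inline_dedupe(block: bytes, index: set[bytes]) -> float:
    """Do the hash-and-lookup work an inline dedupe engine would add
    to a single write; return the time spent in milliseconds."""
    start = time.perf_counter()
    digest = hashlib.sha256(block).digest()
    if digest not in index:
        index.add(digest)
    return (time.perf_counter() - start) * 1000.0

index: set[bytes] = set()
samples = sorted(timed_inline_dedupe(bytes(4096), index) for _ in range(10_000))
p99 = samples[int(len(samples) * 0.99)]
print(f"p99 dedupe overhead: {p99:.3f} ms per 4 KB write")

# If the measured overhead pushes write latency past what the workload
# tolerates, that is the signal to disable deduplication.
if p99 > LATENCY_BUDGET_MS:
    print("overhead exceeds the latency budget; consider disabling dedupe")
else:
    print("overhead is within budget; keep dedupe enabled")
```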