Compression and other data reduction techniques can help bring down the cost of data backups while also enabling backups to complete more quickly. Here are some best practices for compressed backups that will improve your data management and storage strategy.
First, use hardware compression whenever possible, but be aware of the vendor dependency it can create.
For those who might not be familiar with the compression process, it works by replacing repetitive strings of data with a token identifier. If, for example, the string 12345 appeared several times within a file, each occurrence might be replaced with a single character, thereby reducing the file's footprint on the backup target.
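The substitution idea can be sketched in a few lines of Python. This is purely illustrative -- the one-byte token and the restore dictionary are assumptions for the example, not how any particular backup product implements compression:

```python
# Hypothetical sketch of token-based compression: every occurrence of a
# repeated string is replaced with a one-character token, and a dictionary
# is kept so the data can be restored ("rehydrated") later.
data = "12345-abc-12345-def-12345"
token = "\x01"                       # illustrative single-character token
table = {token: "12345"}             # dictionary needed for rehydration

compressed = data.replace(table[token], token)

# Rehydration: expand each token back into the original string.
restored = compressed
for t, s in table.items():
    restored = restored.replace(t, s)

print(len(data), len(compressed))    # footprint shrinks from 25 to 13 chars
```

Real compressors such as DEFLATE work on repeated byte patterns rather than a fixed dictionary, but the principle -- short references standing in for redundant data -- is the same.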
Because of the way compression works, data must be read and analyzed before it can be compressed, which is a computationally intensive process. Using hardware compression offloads this work from your backup server -- thus freeing up CPU resources -- and lets the backup target handle it instead. The caveat to compressed backups, especially in the case of tape drives, is that if you were to replace an aging drive with a drive made by another manufacturer, the new drive may not know how to rehydrate the compressed data during a restore operation.
A second best practice for compressed backups is to take stock of your data before committing to compression. Compression is beneficial only if the data contains redundancy. Some types of files are already compressed, which means that further compression can do little or nothing to reduce the data footprint. This is especially true for some media types, such as JPEG and MPEG files, as well as for compressed archives, such as zip files.
Finally, for optimal compressed backups, don't rule out the use of software compression. Software compression compresses data before it is sent to the backup target, which is especially helpful when the target resides in a remote location and only limited bandwidth is available for servicing the backup. Compressing the data at the software level reduces the amount of data that must be transmitted to the backup target, thereby cutting bandwidth consumption and shortening the time the backup takes to complete.
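A minimal sketch of the compress-before-transmit pattern, assuming a hypothetical `send` callable that stands in for whatever actually ships bytes to the remote target:

```python
import zlib

def send_backup(chunk: bytes, send) -> int:
    """Compress a backup chunk on the client side, then transmit it.

    `send` is a stand-in for the transport (socket, HTTP client, etc.).
    Returns the number of bytes actually put on the wire.
    """
    payload = zlib.compress(chunk, 6)  # compress before it leaves the server
    send(payload)
    return len(payload)

# Simulate the transport with a list that collects transmitted payloads.
sent = []
raw = b"log entry\n" * 5_000           # 50,000 bytes of redundant backup data
wire_bytes = send_backup(raw, sent.append)

print(f"raw: {len(raw)} bytes, on the wire: {wire_bytes} bytes")
```

The restore side would run `zlib.decompress` on each received payload; only the compressed bytes ever cross the bandwidth-limited link.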