Why does copy data management matter? Because you need better data protection than you have today, and given how the business is scaling, you can't afford to get there by simply doing more of what you're already doing.
Operationally, production storage is growing at approximately 40% annually, as is secondary protection storage. Sure, you get some deduplication benefits, but the first copy of unique production data still takes up nearly the same amount of space in the backup container. Meanwhile, data storage budgets within IT are growing at around 7% annually. So, you can't keep doing what you've been doing.
Technically, business dependence on IT and data access continues to grow, so nightly backups alone are never enough. Instead, most environments are combining backups, snapshots, replication, business continuity/disaster recovery (BC/DR), and sometimes archiving and availability technologies all under the noble banner of "protecting the business and being agile."
Of course, building an agile data protection infrastructure (hardware, software, services and talent) is expensive. Because of this, companies need to change the economics of managing their current data protection infrastructure (DPI). There are two options:
1. Increase the ROI of your DPI. What else can you do with those secondary copies of data? Analytics? Enable test/development without affecting production? Could other teams in the organization benefit from read-only access to a copy of the data? All of these business-enablement scenarios increase the overall ROI of the data protection infrastructure. While you might not have initially aspired to those scenarios, they can help to pay for the data protection your organization is asking for.
2. Reduce your DPI. This doesn't mean you should stop backups, quit snapshotting or halt replication. But do you need every disparate copy that all those processes create? Are snapshots taking up too much primary storage? How many copies do you need off-site? Are you keeping too much data, or just what's needed for long-term preservation and/or BC/DR?
Copy data management (CDM) aims to synthesize these two options. How do you reduce the number of copies managed by the multiple data protection methods, which are all legitimately aimed at helping IT to be agile? In some cases, CDM requires a complete rethinking of how data protection is accomplished. In others, CDM can be achieved by getting much smarter about how the myriad data protection tools are orchestrated. The latter approach requires integration between the backup, snapshot and replication engines and, more importantly, a rich and intelligent catalog to keep track of copies so that the broader goal of pruning redundant copies or partial copies can be achieved.
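To make the catalog idea concrete, here is a minimal, purely illustrative sketch of how such a catalog might flag redundant copies across protection methods. This is not any vendor's actual implementation; the class, field names and fingerprinting approach are all hypothetical, assuming each copy can be fingerprinted by its contents.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Copy:
    """One protection copy of a dataset, however it was produced."""
    dataset: str       # what was protected, e.g. "sales-db" (hypothetical)
    method: str        # "backup", "snapshot" or "replica"
    content_hash: str  # fingerprint of the copy's contents (assumed available)
    created: datetime  # when the copy was taken

def find_redundant(catalog: list[Copy]) -> list[Copy]:
    """Return copies whose contents duplicate an earlier copy of the
    same dataset, regardless of which protection engine made them.
    The oldest copy of each unique fingerprint is kept."""
    seen: set[tuple[str, str]] = set()
    redundant: list[Copy] = []
    for copy in sorted(catalog, key=lambda c: c.created):
        key = (copy.dataset, copy.content_hash)
        if key in seen:
            redundant.append(copy)   # same data already cataloged; prune candidate
        else:
            seen.add(key)
    return redundant
```

The point of the sketch is the cross-engine view: because the catalog keys on dataset and content rather than on which tool made the copy, a snapshot and a backup holding identical data show up as one unique copy plus one prune candidate, which is exactly the visibility the orchestration approach depends on.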
If you don't have a clear understanding of what's in each of the copies you're making across these processes, then you're guaranteed to have too many copies, often of data you don't need. In turn, your data protection infrastructure will be too expensive and likely still insufficient for the agility needs of the business. In some cases, existing tools can be stitched together for CDM effectiveness; in others, a complete reengineering of how data protection is done is required. Either way, it's time to get smarter about how you create and manage data copies, because you can't keep doing it the way you always have.
About the author:
Jason Buffington is a senior analyst at Enterprise Strategy Group. He focuses primarily on data protection, as well as Windows Server infrastructure, management and virtualization. He blogs at CentralizedBackup.com and tweets as @Jbuff.