- Jason Buffington, Enterprise Strategy Group
Not too long ago, the gold standard for protecting organizational data involved using a disk-to-disk-to-tape process. First, a copy of production data went to secondary disk to enable rapid recovery if needed, and then the data went to tape for long-term retention. Previously, some organizations used only tape, and a few moved to using only disk when that became cost-effective. Even then, most IT groups knew it made more sense to use a combination of the two media types to leverage what each did best: disk for recovery, tape for retention.
Fast forward a few years and the gold standard for data protection methods boiled down to employing backups, snapshotting and replication, specifically the following:
- Backups to provide multiple previous versions over an extended timespan;
- Snapshotting to deliver the fastest recovery from a near-current version; and
- Replication for data survivability at an alternate location.
Some have argued that one of these data protection methods should supplant the others; which one depends on your point of view. But the fact is the best approach has always been to use each process for what it does best, in a complementary manner with the others.
Here we are today, and it appears the gold standard for protecting data, or at least the de facto standard at enterprise-scale organizations, has changed once again. It now centers on using multiple data protection methods per workload, namely employing the following:
- Something virtualization-specific;
- Plus a method that's database-specific; and
- A general-purpose backup process for "everything else."
Importantly, since most of those "everything else" backup products do not protect data generated from software-as-a-service applications such as Office 365, Google Docs or Salesforce, most enterprises end up using four different types of data protection products.
Ramifications of fragmentation
Do we really need four backup products? No, but because the approach to protection is already so fragmented, many IT operations admins -- and even senior IT decision makers -- can no longer persuade their vAdmin and database admin colleagues that a unified product would really work better.
This situation is really the fault of vendors -- specifically, the unified-product vendors who haven't invested enough in marketing awareness of the economic benefits and technical proofs regarding how their products support varied workloads as well as niche offerings do. Arguably, this lack of effective promotion has done more to hold back the adoption of unified data protection than any engineering gap in delivering comparable protection capabilities. Until vendors fix this messaging problem, today's data protection fragmentation will continue.
In the meantime, having each administrator perform their own backups for the technological areas under their domain is a dangerous practice. Think about it: Most workload or platform admins only really care about being able to achieve 30-, 60- or 90-day rollbacks, for example. They are not worried about 5-year, 7-year or 10-year retention rules.
Corporate data must be protected to a corporate standard, however, which can include adhering to long-term retention and deletion requirements. That's a consideration regardless of how fragmented the actual execution of protection is. So, right now, some organizational data is being under-protected and some over-protected. This patchwork of data protection methods is making organizations vulnerable.
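The gap described above can be made concrete with a small sketch. The policy names, workloads and retention figures below are purely illustrative, not taken from any real product: the point is simply that an admin-level rollback window and a corporate retention rule are different numbers, and comparing them per workload exposes where data is under-protected.

```python
# Hypothetical sketch: reconciling per-admin rollback windows with a
# corporate retention standard. All names and numbers are illustrative.

CORPORATE_RETENTION_DAYS = 7 * 365  # e.g., a 7-year compliance rule

# What each platform admin actually configures (rollback focus only)
admin_policies = {
    "virtualization": 30,
    "database": 90,
    "file_servers": 60,
}

def effective_policy(admin_days: int, corporate_days: int) -> dict:
    """Flag workloads whose admin-level policy falls short of the
    corporate standard -- the under-protection gap described above."""
    return {
        "admin_rollback_days": admin_days,
        "corporate_retention_days": corporate_days,
        "under_protected": admin_days < corporate_days,
    }

report = {workload: effective_policy(days, CORPORATE_RETENTION_DAYS)
          for workload, days in admin_policies.items()}

for workload, status in report.items():
    if status["under_protected"]:
        print(f"{workload}: {status['admin_rollback_days']}-day rollback "
              f"misses the {CORPORATE_RETENTION_DAYS // 365}-year rule")
```

In this toy example every workload is flagged, because a 30-, 60- or 90-day rollback window never satisfies a multi-year retention rule on its own.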
A multiplicity of gold standards
Another thing to note is that gold standards do not necessarily replace one another. Many organizations may be using two, three or four backup products across their workloads. But they are still supplementing those backups with snapshots and replicas. That's actually a wise move, as no backup offering can replace the agility that comes with snapshotting or replication.
They're also still using disk for rapid restoration and tape for long-term retention. And many are now adding cloud-based protection (disaster recovery as a service, for example) to achieve added agility.
At the end of the day, what these admins and the organizations they work for should care most about is the agility and reliability of the protection effort -- regardless of the various mechanisms and media used to facilitate that protection.
We are going to have heterogeneous protection media, and we are going to have multiple data protection methods. With those realities in mind, to avoid unnecessary risk, the answer might be to have as close to a common catalog, control layer (for policy management) and console as possible. That way everyone will understand what is really going on across an environment via a single pane of glass, regardless of fragmentation behind the scenes.
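To illustrate what a common catalog might look like, here is a minimal sketch. The class, workload names and fields are hypothetical, not drawn from any vendor's product; the idea is only that recovery points created by otherwise separate tools and media get indexed in one place, so any admin can answer "what is my most recent recovery point for this workload?" without caring which product produced it.

```python
# Hypothetical sketch of a "common catalog": one queryable index over
# recovery points produced by otherwise separate protection tools.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RecoveryPoint:
    workload: str      # e.g., "sql-prod-01" (illustrative name)
    tool: str          # which product created it (VM-, DB- or general-purpose)
    medium: str        # "disk", "tape", "cloud", "snapshot", "replica"
    created: datetime

class CommonCatalog:
    """Aggregates recovery points from every tool into one index,
    giving a single view across a fragmented environment."""
    def __init__(self) -> None:
        self._points: list = []

    def register(self, point: RecoveryPoint) -> None:
        self._points.append(point)

    def latest(self, workload: str) -> Optional[RecoveryPoint]:
        candidates = [p for p in self._points if p.workload == workload]
        return max(candidates, key=lambda p: p.created, default=None)

catalog = CommonCatalog()
catalog.register(RecoveryPoint("sql-prod-01", "db-backup", "disk",
                               datetime(2024, 1, 1, 2, 0)))
catalog.register(RecoveryPoint("sql-prod-01", "array-snapshot", "snapshot",
                               datetime(2024, 1, 1, 6, 0)))
print(catalog.latest("sql-prod-01").tool)  # the most recent point wins
```

The control layer and console would sit on top of an index like this, applying one corporate policy to every registered point regardless of which tool or medium sits behind it.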
About the author:
Jason Buffington is a senior analyst at Enterprise Strategy Group. He focuses primarily on data protection, as well as Windows Server infrastructure, management and virtualization. He blogs at CentralizedBackup.com and tweets as @Jbuff.