This article can also be found in the Premium Editorial Download "Storage magazine: Goodbye, old backup app."
Disk is great for backups and speedy recoveries, and should play a key role in a DR plan, but tape is still the best choice for long-term data retention.
The calendar may read 2013, but I still hear IT professionals and vendors assert that disk-based storage is always best when it comes to protecting business data. To me, that's more of a "1999" mentality.
Admittedly, from a performance perspective, disk will usually be your best bet. You can compress and deduplicate data more efficiently to disk, and it's certainly a faster data-restoration medium when you're trying to re-establish part of your environment from a backup. In general, disk is superior to tape- or cloud-based storage as the first tier of recovery.
But when it comes to long-term retention -- adhering to a seven-, 10- or 25-year data preservation policy -- using disk alone is nearly always impractical.
Tape's tarnished past
So why has tape been reduced to lingering in the shadow of disk as a viable long-term data retention tier? In part, it may be because data deduplication became incredibly popular and dedupe is a disk-centric process.
But it was mainly because every five to seven years, tape vendors revised their tape formats and form factors. Over my career, I've used 4mm DAT, 8mm, DLT and Linear Tape-Open (LTO), just to name a few. While vendors tried to incorporate backward compatibility with the new formats, it didn't stop IT folks from viewing tape as a rather old-fashioned, hard-to-manage medium with reliability issues.
Imagine finding out that the data you just stored is now on an obsolete tape format, and regulations mandate keeping the information for another 20 years. You have two options: hang onto the old tapes and the compatible tape libraries required to read them for two more decades, or transfer all the data to the newer tape format. The second option is not only a major undertaking, but it might make it harder to prove that the original data is intact and unaltered after the migration.
LTO sets tape standards
But there's good news. The biggest "anti-tape" arguments are more or less invalid now due to the following:
- Most tape vendors have standardized on LTO, an open-format cartridge. LTO tape libraries not only work with the medium's current iteration, LTO-6, they can also read LTO-5 and LTO-4 tapes. It's a consistent retention medium, so organizations no longer need to maintain old tape libraries or migrate data just to keep it readable.
- Believe it or not, today's LTO tape cartridges have a higher mean time between failures than individual hard drives. The claim that tape is failure-prone compared with disk is simply outdated and untrue. Granted, most storage operations don't write data to a single spindle; they write it to an array engineered for fault tolerance across spindles. But the point stands: data written to tape is well protected, from both a reliability and an encryption standpoint.
Tape has re-emerged from disk's shadow, yet another example of the constant change that defines IT. Tape is now longer-lived, more reliable and faster than it was in the old days.
Still, that doesn't mean tape has overtaken disk as the go-to medium for disaster recovery and short-term backup. Disk remains the best medium to recover from using a modern disk-to-disk backup solution or snapshot mechanism.
Retention is a different story, and no single offering fits every use case. After all, the phrase "long-term retention" is relative. For three years of retention, disk might be fine. For 10 years, tape is likely the better choice. (The cloud is another good option, but any savvy cloud provider holding your data for 10-plus years is using a large-scale, highly economical tape farm to do it.)
Some practical advice for IT
Most busy IT teams pursue multiple data protection and retention goals in parallel. To support those simultaneous efforts, the team should determine which combination of disk, tape and software works best for its particular business.
You should start by pinpointing exactly what and how you need to recover. For example, if you have to recover data within seconds, you should implement snapshots. If there's a need to recover data across long distances, replication should be incorporated. When there's a need to restore from a range of previous versions of data, backup is a good choice. And if you need to recover data generated 10 years ago, you need an archive … a tape archive.
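That mapping from recovery requirement to mechanism can be sketched as a simple decision rule. This is a hypothetical illustration only; the function name, parameters and the 60-second snapshot threshold are assumptions for the sketch, not figures from the article or any product:

```python
def recovery_mechanism(rto_seconds=None, cross_site=False,
                       needs_versions=False, age_years=0):
    """Suggest a data protection mechanism for one recovery requirement.

    A hypothetical decision sketch of the guidance above; real planning
    also weighs cost, compliance mandates and existing infrastructure.
    """
    if age_years >= 10:
        return "tape archive"   # decade-old data belongs in an archive
    if cross_site:
        return "replication"    # recovery across long distances
    if needs_versions:
        return "backup"         # restore from a range of prior versions
    if rto_seconds is not None and rto_seconds <= 60:
        return "snapshots"      # near-instant recovery
    return "backup"             # a sensible default

print(recovery_mechanism(rto_seconds=30))    # snapshots
print(recovery_mechanism(cross_site=True))   # replication
print(recovery_mechanism(age_years=12))      # tape archive
```

In practice, of course, one data set may trigger several of these rules at once, which is exactly why the article argues for a blended architecture rather than a single medium.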
Current research from Enterprise Strategy Group shows that (regrettably) organizations tend to use tape more for backup than for archiving. However, "best practice" data protection and archiving requires an architecture that balances the performance benefits of disk for backup/fast restore with the cost and reliability benefits of tape for long-term retention. You want a strategic, diverse set of media and mechanisms protecting your data.
That doesn't mean deploying disjointed offerings from multiple vendors. A few providers now offer tape, disk, snapshot and replication products that have been cohesively converged to support common data protection goals for big data centers and small remote offices alike. It's even possible (a challenge, but possible) to corral the management of it all under one umbrella.
If you're trying to define your organization's data protection strategy, the worst thing you can do is base the definition on the capabilities/limitations of the systems and software you have on the floor. Instead, think strategically and objectively about how you need to recover data and accomplish the other data protection activities you need to deal with. Then choose the right technologies -- which may include disk and tape -- to help you get there.
About the author:
Jason Buffington is a senior analyst at Enterprise Strategy Group. He focuses primarily on data protection, as well as Windows Server infrastructure, management and virtualization. He blogs at CentralizedBackup.com and tweets as @Jbuff.
This was first published in September 2013