Like it or not, the chances are good that your data will end up on magnetic tape.
Looking at shipments of Linear Tape-Open (LTO) media cartridges, tape backup systems are continuing to gain ground, even in this era of NAND flash arrays and NVM Express architectures. Recent news from the LTO Consortium proclaimed that approximately 76,000 petabytes (PB) of compressed tape capacity shipped in 2015 -- a 17.5% increase over the previous year -- and that more than 385,000 PB of capacity has shipped since the LTO tape cartridge was introduced in 2000.
According to Peter Faulhaber, president of Fujifilm Recording Media USA, LTO-7 continues to generate substantial interest worldwide and is helping to break records in terms of tape capacities sold. More importantly, perhaps, the users of tape have become even more diverse.
Fujifilm has hosted an annual IT Executive Summit for the past seven years, and the audience has grown from stalwarts of industry and commerce to include "the industrial farmers" of the cloud era.
"All the major cloud service providers have been attending the events to learn more about how tape can be useful in their cloud data centers," Faulhaber said.
Analysts and industry watchers, including Fred Moore, president of Horison Information Strategies, have noted a shift from the use of tape as a data backup modality toward a more archival storage role where the extremely low cost of tape-based storage, combined with its generous capacity and durability characteristics, stands out. But the recent adoption of the Linear Tape File System (LTFS) format specification by the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC 20919:2016) in June 2016 may stand such projections about tape backup systems on their ear.
Don't blame tape for your problems
If you ask a firm that has dumped its tape backup for some other data protection scheme to explain its rationale, the response usually has to do with the complexity or inefficiency of backup software. Creating a backup meant instrumenting the entire application and storage infrastructure with agents, scheduling data copy operations, provisioning networks to handle backup data streams and ensuring the precise operation of the tape storage kit. At the end of the process, manual steps were still needed to take written tapes out of the tape system, box them up and ship them to off-site storage.
The majority of problems with tape backup systems were caused by so-called "carbon robots," insiders claimed, referring to errors by human operators. Backup software was too complicated for the typical operator to deploy, and it was often installed, configured or operated incorrectly. In truth, the whole backup concept seemed burdensome:
- Take block, file and object data from its primary storage;
- Encapsulate it into a proprietary software container that could only be read and written by the backup software used;
- Write it to linear media -- many cartridges for a very large backup data set; and
- Read and convert the backup data into native file and object system formats if needed.
To many, it just seemed like a lot of work.
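To make that container round trip concrete, here is a minimal Python sketch of steps two and four from the list above. The standard tarfile module merely stands in for a vendor's proprietary container format, and all paths are hypothetical:

```python
import tarfile
from pathlib import Path

SOURCE = Path("/data/app")                 # hypothetical primary storage path
CONTAINER = Path("/backup/job-0421.tar")   # stand-in for a proprietary backup container
RESTORE_TO = Path("/restore")

def backup() -> None:
    # Step two above: encapsulate primary data into a container that only
    # the backup software can interpret. (A real product would then spool
    # this container across many tape cartridges.)
    with tarfile.open(CONTAINER, "w") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)

def restore() -> None:
    # Step four: the container must be read back and unpacked into native
    # file-system format before the data is usable again.
    with tarfile.open(CONTAINER, "r") as tar:
        tar.extractall(RESTORE_TO)

if __name__ == "__main__":
    backup()
    restore()
```

Neither step touches the data in its native form; both exist purely to serve the container format, which is exactly the overhead LTFS (discussed below) removes.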
Rarely was tape technology actually to blame for problems when restoring data from a tape backup system, despite what some disk vendors claimed in the late 1990s. When a hardware or media-related issue did arise, its root cause was usually not tape media, drives or robots, but users. If backups could not be restored, chances were good that someone had recorded the backup on damaged tapes -- cartridges that had been dropped or otherwise mishandled -- or that the data had been corrupted by drives that were not properly cleaned and maintained.
Regardless of the causes of tape-based backup problems, and despite the fact that most complaints could be traced back to shoddy backup software, tape technology took the rap. That, combined with the falling price of high-capacity disk drives and the relative simplicity of copying data from one disk to another, saw tape fall out of favor in the late 1990s.
But about a decade and a half later, three factors are causing firms to re-evaluate their decision to move away from tape backup systems.
Big data equals an invitation to tape
The sheer amount of data being produced in this era of the internet of things and big data analytics is the first factor. With tens of zettabytes of new data requiring storage in clouds by 2020, leading cloud vendors are looking at tape to provide the capacity they need at a cost that won't wreck the cloud model. Tape looks very inviting both for archival data and backups.
Customers are increasingly using tape to copy and store archival and backup data so they can transport it more cost effectively to a cloud service provider, a process known as cloud seeding. Many firms have discovered that transporting large quantities of data across contemporary networks is slow, expensive and prone to errors. Placing the data on transportable media is much more convenient and less costly.
The advent of cloud services has not relegated tape backup to the dustbin. In fact, the dynamics of cloud storage have underscored the value of tape for backup and archive.
The write stuff
The second factor promoting the comeback of tape backup is write speed. Data transfer rates for tape are significantly faster than those of disk and on par with flash storage, with native write rates in excess of 300 MBps. Yes, NAND flash is undergoing a technology refresh as we speak, but industry insiders point out that even as chip density advances, write performance tends to get worse. For the price, tape is competitive and well suited to the task of containerizing large quantities of backup or archival data. And writing backup data to tape at 300 MBps works out to be much more cost-efficient than trying to push terabytes of data across an expensive wide area network or metropolitan area network interconnect. For example, a 1 GBps pipe, such as OC-192 or 10G SONET transport, will require close to three hours to move 10 TB of data within 80 kilometers.
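The arithmetic behind those figures is easy to check. The short Python sketch below uses decimal units (1 TB = 10^12 bytes), as storage and network vendors typically quote them; the 300 MBps per-drive rate is the LTO-7-class figure cited above:

```python
# Back-of-the-envelope check of the transfer-time claims above.
DATA_TB = 10
WAN_GBPS = 1.0     # 1 GBps pipe (roughly OC-192 / 10G SONET)
TAPE_MBPS = 300    # native LTO-7-class write rate, per drive

data_bytes = DATA_TB * 10**12
wan_hours = data_bytes / (WAN_GBPS * 10**9) / 3600
tape_hours = data_bytes / (TAPE_MBPS * 10**6) / 3600

print(f"10 TB over a 1 GBps WAN:   {wan_hours:.1f} hours")   # ~2.8 hours
print(f"10 TB to one tape drive:   {tape_hours:.1f} hours")  # ~9.3 hours
# A library writing to three or four drives in parallel lands in the same
# range as the WAN link -- without the recurring circuit cost.
```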
At current and anticipated write speeds, backup data writes can be done efficiently. For those concerned that backups will increase the latency of production storage while they are running, there are always flash-to-tape, disk-to-disk-to-tape (D2D2T) and other data buffering strategies that can mask the tape write operation.
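The buffering idea is simple: production waits only for a fast disk or flash copy, while a background worker drains the staged data to tape. Here is a minimal sketch of that pattern, with all mount points hypothetical:

```python
import shutil
import threading
from pathlib import Path
from queue import Queue

STAGING = Path("/mnt/fast-disk/staging")   # hypothetical flash/disk buffer
TAPE = Path("/mnt/tape")                   # hypothetical tape (e.g., LTFS) mount

jobs: Queue[Path] = Queue()

def stage(source: Path) -> None:
    # Fast path: the production-facing call returns as soon as the
    # disk copy lands in the staging area.
    staged = STAGING / source.name
    shutil.copy2(source, staged)
    jobs.put(staged)

def drain() -> None:
    # Slow path: a background worker migrates staged copies to tape,
    # so tape write latency never touches production storage.
    while True:
        staged = jobs.get()
        shutil.copy2(staged, TAPE / staged.name)
        staged.unlink()
        jobs.task_done()

threading.Thread(target=drain, daemon=True).start()

stage(Path("/data/db-dump.bak"))   # production sees only the disk-copy latency
jobs.join()                        # for the sketch, wait for the tape copy to finish
```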
LTFS for the win
Finally, there is LTFS, which could radically rebalance the backup world in favor of tape. With LTFS, tape stores files and objects without the need for backup software. The native file or object system used to record the data is replicated completely to the tape media, rather like using a USB drive to record files and directories from a disk drive or solid-state drive. There are no longer any proprietary containers needed for backups, which eliminates both the time required to format data for a container and the time required to read data back out of the container when it is needed for a restore.
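Because an LTFS cartridge mounts as an ordinary volume, ordinary file tools are all that is needed. The sketch below assumes a cartridge already mounted at a hypothetical /mnt/ltfs path:

```python
import shutil
from pathlib import Path

LTFS_MOUNT = Path("/mnt/ltfs")     # hypothetical mount point for an LTFS cartridge
SOURCE = Path("/data/projects")

# No backup agent, no proprietary container: the cartridge behaves like
# any other mounted volume, so standard file operations suffice.
shutil.copytree(SOURCE, LTFS_MOUNT / SOURCE.name)

# A restore is just a copy in the other direction -- or a direct read,
# since files on tape keep their native names and directory layout.
for f in (LTFS_MOUNT / SOURCE.name).rglob("*"):
    print(f.relative_to(LTFS_MOUNT))
```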
LTFS takes a lot of the backup and restore latency out of traditional data backup and makes tape backup very similar to disk-to-disk mirroring, but without the hardware "identicality" requirements often associated with disk mirroring strategies. Vendors are shipping very good implementations of LTFS technology, ranging from cognitive global namespaces that link storage sources to LTFS tape -- whether on premises or in the cloud -- to appliances that simplify the implementation of D2D2T and disk-to-disk-to-cloud strategies. Companies can drop this technology into their environments and use it with virtually any kind of storage infrastructure or application workload.
Tape backup's hills to climb
There are still some hurdles to be surmounted if tape backup systems are to enjoy the kind of resurgence we've seen in tape-based archiving.
- The voices arguing to shelter data in place -- just power down the drives where older, rarely accessed data is written -- need to be answered with facts about the vulnerability of this so-called frictionless data protection strategy. Trustworthy statistics are needed on the failure rates of disk drives -- and solid-state drives -- that are powered down for several months and then restarted. Such data would likely squelch the arguments for this ill-starred concept now being heard in big data circles.
- New IT operatives need to be educated about tape. At a recent event on data protection, a young virtualization administrator approached me and said he had never heard of tape technology, which sounded to him like an approach that could provide a lot of value in his shop. He wanted to know where he could learn more -- and why one needed to build a room full of books to stand up a tape system, having misunderstood the term tape library, which was not part of his vocabulary.
Whether you implement tape in your own shop or use a cloud service for data protection and backup, the chances of your backup going to tape are increasing.