To figure out the true capabilities and limitations of current disk-to-disk-to-tape strategies, we have to examine actual product implementations and talk to the folks in the trenches about what works and what doesn't. Recently, I had the opportunity to do just that, following a talk I gave to a few IT folks in Atlanta, Ga. Among the attendees was Jerry Porter, vice president of production for the e-mail marketing company Silverpop, which is headquartered in the city.
Silverpop is one of the few successful application service providers still standing from a few years ago when ASPs were all the rage. The firm aids clients in using e-mail marketing in a legal and effective way -- that is, in a manner that respects client sensibilities and supports customer relationship development and management.
Bottom line: clients come to Silverpop to manage their e-mail lists, to implement e-mail contact programs, and to collect information on the results of mailings. Porter's job is to ensure that everything goes smoothly and that the greatest possible value is delivered to the client. Silverpop wouldn't stay in business very long if it lost the data that customers have entrusted to the service provider.
So, Porter rightly regards data protection as a critical component of his business. At any given time, Silverpop holds about a half terabyte of active client-provided data -- graphics, video and text for mailings, plus the mailing lists themselves -- and another 3.5 TB (and growing) of company data and client data archives. He says he recently decided to deploy a Fibre Channel fabric, selecting EMC CLARiiON arrays and a 32-port Brocade switch in part to accommodate future storage growth, but also to connect all storage to an existing ADIC tape library.
Silverpop had used direct disk-to-tape backup for years as its primary data protection method. Concerns about the efficacy of this strategy began to mount, however, as the volume of data to be backed up grew and LAN-based backups took more and more time. Time-to-data is one metric that clients use to hold their service provider's feet to the fire.
Says Porter, "Our service contracts with our clients required that we be able to restore data to a usable form within an extremely short period of time following any interruption. We were concerned about the amount of time required to find and restore a specific file if, say, a mailing list was corrupted or some other mishap occurred."
To address the concern, Porter reports that Silverpop has deployed an additional tier of CLARiiON arrays in its FC fabric -- essentially, to serve as a tape surrogate. The arrays comprising this secondary disk tier, he says, do not use the Fibre Channel drives found in the primary storage tier, but instead leverage less expensive Serial ATA (SATA) disks.
"We've used disk as a target for tape backup in the past, using the support for disk targets in Veritas NetBackup," Porter says, "but the fabric enabled us to establish an off-LAN multi-tier storage scenario that works efficiently."
Porter says that damaged or deleted files can often be restored immediately to the production environment from the Tier 2 disk layer ... much faster than they can be restored from tape. But, even with this disk buffer, tape continues to play a role. Tape backups between Tier 2 disk and his ADIC library can be scheduled and executed as a separate process that doesn't impact production processing at all: a win-win from an operational standpoint.
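The two-hop flow Porter describes -- a fast copy from production disk to a secondary disk tier, followed by a separate tape job that reads only that second tier -- can be sketched with ordinary Unix tools. This is a minimal illustration, not Silverpop's actual configuration or a NetBackup setup: all paths are hypothetical, and a plain tarball stands in for a volume in the ADIC tape library.

```shell
#!/bin/sh
# D2D2T sketch: assumed demo paths only; a tarball stands in for tape.
set -e

BASE=/tmp/d2d2t_demo
PRIMARY=$BASE/primary   # Tier 1: production (FC disk in the article)
STAGE=$BASE/stage       # Tier 2: SATA disk tier, the "tape surrogate"
TAPE=$BASE/archive.tar  # stand-in for a tape volume in the library

mkdir -p "$PRIMARY" "$STAGE"
printf 'mailing-list data\n' > "$PRIMARY/list.csv"

# Hop 1: quick disk-to-disk copy off the production tier.
cp -R "$PRIMARY/." "$STAGE/"

# Hop 2: disk-to-tape, run later as a separate job that reads only
# the stage tier, so production I/O is never touched.
tar -cf "$TAPE" -C "$STAGE" .

# A restore hits the Tier 2 disk first: a plain file copy, no tape mount.
cp "$STAGE/list.csv" "$BASE/restored.csv"
```

The point of the structure is that the tape job's source is the stage tier, so it can be scheduled whenever the library is free without contending with production workloads.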
Porter's positive experience with D2D2T hinges on the fact that he did not simply buy an advertised solution and cross his fingers that it would work. He reports that at least one vendor besides EMC was considered for his FC fabric; he found the competing hardware from Hitachi Data Systems every bit as capable as the EMC platform. The ultimate decision came down to support and bid price.
The rollout of the solution went on schedule and on budget; the only challenges involved scheduling and the availability of consulting resources. Testing everything in advance proved to be an effective way to grease the skids.
"We benefited from the fact that we were building a whole new hardware/solution deployment, rather than adding on to existing infrastructure," Porter observes. "We implemented test data stores, then tested the whole thing end to end, including recovery. Again, easy to do because the infrastructure was not in a production mode until testing was complete and conversion was undertaken."
He added that, even as we chatted, his personnel were preparing to retrofit another of the company's production centers with an identical solution -- an endeavor constrained by a limited maintenance window for getting the job done.
Asked about solution performance, Porter admitted that he had no numbers to offer at this point; the environment where the solution is actively deployed is not yet seeing the traffic volumes or workload needed to generate meaningful figures. "In two weeks, when we bring up the environment currently being deployed, we will see meaningful performance numbers immediately," he promises. (He has agreed to update us when that happens.)
Porter says that his success is based as much on common sense as on any vendor's technology. His approach from the beginning "has been to implement new technology in an incremental manner."
For more information:
Backup School: Lesson 2 -- Which backup media is right for you?
About the author: Jon William Toigo has authored hundreds of articles on storage and technology along with his monthly SearchStorage.com "Toigo's Take on Storage" expert column and backup/recovery feature. He is also a frequent site contributor on the subjects of storage management, disaster recovery and enterprise storage. Toigo has authored a number of storage books, including Disaster Recovery Planning: Preparing for the Unthinkable, 3/e. For detailed information on the nine parts of a full-fledged DR plan, see Jon's Web site.