Secondary storage is a new technology market that has evolved to capture both data backup storage and storage used for test/development environments. The idea behind a secondary storage system is to put to good use the data in backup systems -- especially disk-based ones -- that would otherwise be sitting, spinning and doing very little. Test and development environments typically use some version of production data to validate code changes and new software releases. This data sits on standard storage and can contain many images or snapshots of production data.
So how should you choose a provider for your secondary storage system? Here are five areas to consider.
Ask yourself these questions:
- How do I get my data into a secondary storage system? Typically, the process is conducted through the normal backup cycle.
- Does the secondary device support all of my existing backup needs? There's no point in duplicating existing backup processes; the secondary product should replace backup entirely -- or as much as is practical -- and, thus, meet all the needs of the backup regime already in place.
- Can the secondary system ingest historical backups? This is an area many products have yet to address. As customers cut over to secondary storage, legacy backups still have to be maintained, including tracking when and where those backups were taken.
How is the secondary storage platform used for data restore? Is the system as flexible on data recovery as the existing platform? Data recovery performance is critical for restoring normal service after data loss. A secondary storage system should restore data quickly and not be impacted by delivering other services at the same time. Restore capabilities include being able to run virtual machines (VMs) from the secondary data storage platform for a short period of time without impacting existing backups.
A secondary storage system has to deliver backup performance and ensure test/development environments receive the expected I/O level. Balancing the two is affected not just by the hardware in use, but also by features like efficient snapshots and data indexing. The secondary system should be able to take hundreds or thousands of snapshots and have the ability to restore to any snapshot image, all without a performance impact.
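To make the snapshot requirement concrete, here is a minimal sketch of a point-in-time snapshot catalog with restore to any image. The class and method names are hypothetical, and real systems snapshot at the block level with copy-on-write bookkeeping rather than copying whole data sets; the sketch only illustrates the catalog mechanics the paragraph describes.

```python
import copy

class SnapshotStore:
    """Toy snapshot catalog (illustration only, not any vendor's API).

    Real secondary storage records only changed blocks per snapshot;
    deep-copying a dict of "blocks" stands in for that copy-on-write
    bookkeeping to show cheap captures and restore-to-any-image.
    """

    def __init__(self):
        self.live = {}        # current "volume": block id -> data
        self.snapshots = []   # ordered point-in-time images

    def write(self, block_id, data):
        self.live[block_id] = data

    def take_snapshot(self):
        self.snapshots.append(copy.deepcopy(self.live))
        return len(self.snapshots) - 1   # snapshot index

    def restore(self, snap_id):
        self.live = copy.deepcopy(self.snapshots[snap_id])


store = SnapshotStore()
store.write("b0", "v1")
s0 = store.take_snapshot()
store.write("b0", "v2")
store.write("b1", "x")
store.take_snapshot()
store.restore(s0)      # roll back to the first image
print(store.live)      # {'b0': 'v1'}
```

Because each snapshot is an independent catalog entry, restoring image 500 costs no more than restoring image 5 -- the property the paragraph asks vendors to demonstrate at scale.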
What long-term archive options are there? Long-term retention isn't cost effective on disk platforms, so offloading to tape, an object store or the public cloud is an attractive option. This means tiering data as the volume of stored information increases.
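The tiering decision described above usually reduces to an age-based policy. The following sketch shows one such policy; the tier names and day thresholds are illustrative assumptions, not any product's defaults.

```python
from datetime import datetime, timedelta

def choose_tier(backup_date, now, warm_days=30, archive_days=365):
    """Pick a storage tier by backup age (hypothetical policy).

    Backups older than archive_days go to cheap cloud archive,
    older than warm_days to an on-premises object store, and
    anything newer stays on secondary disk for fast restores.
    """
    age = now - backup_date
    if age > timedelta(days=archive_days):
        return "cloud-archive"
    if age > timedelta(days=warm_days):
        return "object-store"
    return "secondary-disk"

now = datetime(2018, 6, 1)
print(choose_tier(datetime(2018, 5, 25), now))  # secondary-disk
print(choose_tier(datetime(2018, 3, 1), now))   # object-store
print(choose_tier(datetime(2016, 1, 1), now))   # cloud-archive
```

The point of the exercise is that the thresholds, not the mechanism, are the buying decision: ask the vendor where those knobs live and what the restore penalty is from each tier.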
Search and reporting are essential features. VMs are more transient than physical servers and come and go over time. It's therefore important to be able to search for VMs using more than just the name of the virtual machine -- using metadata, for example. Indexing also needs to easily show VMs that no longer exist in the primary infrastructure.
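A metadata search of the kind described might look like the sketch below. The catalog records, tag names and `orphaned_only` flag are hypothetical; the sketch just shows search by metadata rather than by VM name, including backups of VMs that no longer exist on primary storage.

```python
# Hypothetical VM backup catalog. "on_primary" marks whether the VM
# still exists in the primary infrastructure, so a search can also
# surface orphaned backups of deleted VMs.
catalog = [
    {"name": "web01", "tags": {"app": "shop", "env": "prod"}, "on_primary": True},
    {"name": "db01",  "tags": {"app": "shop", "env": "prod"}, "on_primary": False},
    {"name": "test7", "tags": {"app": "shop", "env": "dev"},  "on_primary": True},
]

def search(catalog, orphaned_only=False, **tag_filters):
    """Return names of VMs whose tags match every filter."""
    hits = []
    for vm in catalog:
        if orphaned_only and vm["on_primary"]:
            continue
        if all(vm["tags"].get(k) == v for k, v in tag_filters.items()):
            hits.append(vm["name"])
    return hits

print(search(catalog, app="shop", env="prod"))          # ['web01', 'db01']
print(search(catalog, orphaned_only=True, app="shop"))  # ['db01']
```

Note that the second query finds `db01` even though that VM is gone from primary storage -- exactly the case name-only search tools miss.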