The obvious question is, "What are SSDs doing in backup and recovery plans?" Solid-state storage is fast but still expensive compared with hard drives, so we have to look under the hood of both backup and disaster recovery to find the answer.
Today, 1 TB SSDs retail from distributors at around $200. That is still considerably more per terabyte than backup and archiving workhorse hard drives, which deliver 4 TB or more for the same price.
SSDs get interesting when we look at the backup window, which for many sites is already too long and getting longer as storage grows. We are also moving data to the cloud using gateway appliances, but WAN bandwidth is fixed and limited, which makes a good local buffer a necessary part of the design.
This is where those large SSDs become very useful for backup and recovery plans. Data can be journaled on the gateway to a mirrored pair of SSDs for data integrity. The large SSDs allow many parallel backup streams, limited only by the capacity of the LAN connection, which shortens the backup window significantly.
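A back-of-the-envelope calculation shows why staging to a LAN-speed SSD buffer shortens the window so much compared with pushing backups straight over the WAN. The sketch below uses assumed figures (a 20 TB backup set, a 1 Gb/s WAN, a 10 Gb/s LAN and an 80% link efficiency), not measurements.

```python
# Backup-window comparison: staging to a local SSD buffer at LAN speed
# versus pushing data straight to the cloud at WAN speed.
# All link speeds and data sizes are illustrative assumptions.

def backup_window_hours(data_tb: float, link_gbps: float,
                        efficiency: float = 0.8) -> float:
    """Hours to move data_tb terabytes over a link of link_gbps gigabits/s.

    efficiency discounts protocol overhead and contention (assumed 80%).
    """
    data_bits = data_tb * 1e12 * 8              # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency
    return data_bits / effective_bps / 3600     # seconds -> hours

# 20 TB nightly backup set (assumed figure)
direct_wan = backup_window_hours(20, 1.0)       # 1 Gb/s WAN link
to_ssd_buffer = backup_window_hours(20, 10.0)   # 10 Gb/s LAN to SSD gateway

print(f"Direct to cloud over WAN: {direct_wan:.1f} h")
print(f"Staged to SSD over LAN:   {to_ssd_buffer:.1f} h")
```

With these assumptions the window drops from roughly 56 hours to under 6, after which the gateway can trickle data over the WAN outside the backup window.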
We also need to compress and deduplicate data before sending it over the WAN to the cloud. Because SSDs have high bandwidth, they can feed a background job that reduces the data before transmission, saving space in the cloud and easing the load on the network.
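A minimal sketch of what such a background job does, assuming fixed-size block deduplication with zlib compression. Real gateways typically use variable-size chunking and persistent hash indexes; the block size and structures here are illustrative only.

```python
# Block-level dedup plus compression before WAN upload: keep one
# compressed copy per unique block and a digest sequence for reassembly.
import hashlib
import zlib

def dedupe_and_compress(data: bytes, block_size: int = 4096):
    """Split data into fixed blocks, store one compressed copy per unique
    block, and return (unique_blocks, block_sequence)."""
    unique = {}      # sha256 digest -> compressed block
    sequence = []    # digest order, so the original stream can be rebuilt
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in unique:
            unique[digest] = zlib.compress(block)
        sequence.append(digest)
    return unique, sequence

def reassemble(unique: dict, sequence: list) -> bytes:
    """Rebuild the original stream from the dedup store."""
    return b"".join(zlib.decompress(unique[d]) for d in sequence)

# Example: highly repetitive backup data dedupes and compresses well.
payload = b"backup stream " * 10_000
blocks, order = dedupe_and_compress(payload)
sent = sum(len(b) for b in blocks.values())
print(f"{len(payload)} bytes reduced to {sent} bytes "
      f"({len(blocks)} unique of {len(order)} blocks)")
```

Only the unique compressed blocks and the digest sequence need to cross the WAN, which is where the cloud-storage and bandwidth savings come from.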
Large SSDs are coming down in price due to technology improvements such as 3D fabrication, and in 2017 they are projected to reach a massive 100 TB per drive, further optimizing backup and recovery plans. When that happens, the obvious choice for backup devices will move from HDDs to these fast, high-capacity drives, which will save power and space, as well as dramatically reduce the number of storage appliances needed for a given capacity.
Having fast storage for both local backup units and backup cloud storage will reduce recovery times for individual files. Most backup software caches recent data locally for several days, based on the observation that more than 70% of accesses to backed-up data occur in the first few days after the backup. This makes recent files rapidly available, while the cloud can be searched quickly for any older file, especially if the recovery application also runs in the cloud.
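The tiered restore path described above can be sketched as a simple two-level lookup: serve a file from the local SSD cache if its backup is recent enough, otherwise recall it from the cloud. The seven-day retention window and the dictionary-backed stores are assumptions for illustration, not any particular product's design.

```python
# Tiered restore sketch: recent backups come from a local SSD cache,
# older ones are recalled from cloud storage. CACHE_DAYS and the
# store layout (path -> (backup_time, data)) are assumed for illustration.
import time

CACHE_DAYS = 7  # assumed local retention window

class TieredRestore:
    def __init__(self, local_store: dict, cloud_store: dict):
        self.local = local_store    # path -> (backup_time, data)
        self.cloud = cloud_store

    def restore(self, path: str, now=None) -> bytes:
        now = time.time() if now is None else now
        entry = self.local.get(path)
        if entry and now - entry[0] < CACHE_DAYS * 86400:
            return entry[1]          # fast path: local SSD cache hit
        return self.cloud[path][1]   # slow path: recall from the cloud
```

Because most restore requests arrive within days of the backup, the fast path absorbs the bulk of the traffic and the cloud tier is touched only for older data.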
As we move in the next few years to much faster primary storage in the form of NVDIMM or Gen-Z storage-class memory, SSD-based backup and recovery plans will be essential for matching your primary storage's performance.