

Data backup strategy from a disaster recovery perspective

Brien Posey discusses data backup strategy for DR and the convergence of backup and disaster recovery technologies.

Data protection has evolved at such a rapid pace over the last few years that today's backup and disaster recovery systems barely resemble those from a decade ago. One of the ways this evolution has occurred is through the convergence of backup and disaster recovery.

On the surface, backup and disaster recovery might seem like the same thing, but there is a difference. Backups have historically involved making redundant copies of data. Disaster recovery, on the other hand, is all about how backups and other fault-tolerant mechanisms are put to work after a disaster occurs.

In order to understand and appreciate how these two concepts have merged within a single data backup strategy, it is necessary to consider the way that backups were performed in the past. For decades, backups involved using a scheduled process to copy data to removable media (such as tape). Removable media backups are still used today, but they have largely given way to continuous backups that copy data to a disk-based storage array on an ongoing basis. This type of backup is referred to as continuous data protection.

How CDP, tape-based backup strategies differ

Continuous data protection systems differ from legacy tape-based backups in several ways. One such difference is that tape-based backups result in the accumulation of a large collection of backup media. Depending upon how the backup is made, the same data might get backed up over and over again.

In contrast, modern backup software that relies on continuous data protection may only retain a single copy of the data, which is updated as changes are made. These systems allow users to "roll back" data to a point in time before corruption or hardware failure occurred.
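The rollback capability described above can be sketched as an append-only change journal: every write is recorded with a timestamp, so the data set can be reconstructed as it existed at any earlier point in time. This is a minimal conceptual sketch, not any vendor's implementation; the class and method names are illustrative.

```python
import time


class CDPStore:
    """Toy continuous-data-protection store: journals every write."""

    def __init__(self):
        self.journal = []  # append-only list of (timestamp, key, value)

    def write(self, key, value, ts=None):
        # Each change is journaled rather than overwriting the old copy.
        self.journal.append((ts if ts is not None else time.time(), key, value))

    def state_at(self, ts):
        """Reconstruct the data set as it existed at time ts."""
        state = {}
        for entry_ts, key, value in self.journal:
            if entry_ts <= ts:
                state[key] = value  # later entries supersede earlier ones
        return state


store = CDPStore()
store.write("report.doc", "v1", ts=100)
store.write("report.doc", "v2-corrupted", ts=200)

# "Roll back" to a point in time before the corruption occurred:
print(store.state_at(150))  # {'report.doc': 'v1'}
```

A real CDP product journals block- or byte-level changes and prunes old journal entries by retention policy, but the recovery principle — replay the journal up to a chosen timestamp — is the same.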

Of course, it is important to keep the backup server and the backup target (the storage array) from becoming a single point of failure. That being the case, many continuous data protection systems are also designed to replicate their contents to a secondary storage device. This might be another storage array, cloud storage or even a tape library.

One of the things that made continuous data protection so popular as a data backup strategy is that it effectively dealt with the problems of data growth and shrinking backup windows. Continuous data protection systems completely eliminate the backup window. There is simply no need to schedule a backup, because data is copied on an ongoing basis as changes occur.

Prior to continuous data protection systems, backup administrators often struggled to complete backups within the allotted backup window, but that wasn't the only time-related challenge. The recovery process after a disaster typically involved a long restoration process. This process was sometimes so long that administrators had to prioritize their data so that the most critical data could be restored first.

Snapshot, replication and virtualization have enabled backup, DR convergence

The convergence of backup and disaster recovery technologies has occurred largely because snapshot, replication and virtualization have made it possible to recover from a disaster without the need for a traditional data restoration. The methods involved in this convergence seek to minimize storage cost, while also allowing for instant recovery.

Every vendor takes a slightly different approach to achieving these goals, and the terminology used can also vary from one vendor to the next. Generally speaking, however, instant recovery and minimized storage costs are achieved through the use of differencing disks and snapshots.

A differencing disk is a virtual hard disk that has been reserved for a special purpose. A snapshot is a process that redirects write operations to a differencing disk. With these two concepts in mind, consider how instant recovery becomes possible.

The basic philosophy behind instant recovery is that a full-blown restoration is unnecessary if there is already a copy of the data available online. Rather than launching a traditional restore operation, the failed system can simply make use of the data that is already available on the backup storage array.

Of course, the problem with this approach as part of an effective backup strategy is that if you were to simply redirect a server in a way that allowed it to use your backup in place of its primary storage, the contents of the backup would soon be modified. This is where differencing disks and snapshots come into play.

Before the failed server is allowed to use the data residing on the backup array, a snapshot is created. The snapshot results in the creation of a differencing disk. The failed server can use the data from the backup array, but only for read operations. All write operations are directed to the differencing disk. This ensures that the backup remains in a pristine state.

In the meantime, the failed server's storage can be rebuilt and data can be replicated from the backup array to the failed server's newly rebuilt storage. Once the replication has been completed, the contents of the differencing disk are merged onto the server's storage and then operations are redirected from the backup storage array to the server's usual storage device.
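The snapshot-and-merge sequence described above can be sketched in a few lines: reads fall through to the pristine backup copy, writes land only in the differencing layer, and a final merge folds the accumulated changes into the rebuilt primary storage. This is an illustrative model under simplified assumptions (dicts standing in for block storage), not a vendor implementation.

```python
class DifferencingDisk:
    """Copy-on-write overlay: the base (backup) copy is never modified."""

    def __init__(self, base):
        self.base = base   # read-only contents of the backup array
        self.delta = {}    # writes made while running from the backup

    def read(self, block):
        # Prefer the differencing layer; fall through to the backup copy.
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        # All writes are redirected to the delta; the backup stays pristine.
        self.delta[block] = data

    def merge_into(self, rebuilt_storage):
        # Replicate the backup to the rebuilt storage, then apply the
        # changes that accumulated in the differencing disk.
        rebuilt_storage.update(self.base)
        rebuilt_storage.update(self.delta)


backup_array = {"blk0": "data-A", "blk1": "data-B"}  # pristine backup
disk = DifferencingDisk(backup_array)

disk.write("blk1", "data-B-modified")  # server keeps running meanwhile

rebuilt = {}
disk.merge_into(rebuilt)
print(rebuilt)       # {'blk0': 'data-A', 'blk1': 'data-B-modified'}
print(backup_array)  # unchanged: {'blk0': 'data-A', 'blk1': 'data-B'}
```

The key design point is visible in `merge_into`: because the backup was never written to, it remains usable as a recovery point even after the failed server has been running against it for hours.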

The same concept that allows for instant recovery can also be used to create test/dev environments without actually having to create additional copies of the data. Snapshots are used to create differencing disks that are used solely in the lab environment. Some vendors refer to this as virtual lab technology. The nice thing about it is that it allows lab environments to be instantly created, without incurring the storage costs that would normally be required to store a full copy of the production data.
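The storage economics of the virtual lab approach come from the fact that any number of independent differencing layers can be created over a single read-only production copy. Each lab sees its own changes while the full data set exists only once. A minimal sketch, with hypothetical names:

```python
class LabOverlay:
    """One lab environment's differencing layer over shared production data."""

    def __init__(self, base):
        self.base = base  # shared, read-only production copy
        self.delta = {}   # this lab's private changes

    def read(self, key):
        return self.delta.get(key, self.base.get(key))

    def write(self, key, value):
        self.delta[key] = value


production = {"config": "prod-settings"}

# Two lab environments created instantly, without copying production data:
lab1 = LabOverlay(production)
lab2 = LabOverlay(production)

lab1.write("config", "experiment-1")
print(lab1.read("config"))  # 'experiment-1'
print(lab2.read("config"))  # 'prod-settings' -- labs are isolated
```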

About the author:
Brien M. Posey, MCSE, has received Microsoft's MVP award for Exchange Server, Windows Server and Internet Information Server. Brien has served as CIO for a nationwide chain of hospitals and has been responsible for the Department of Information Management at Fort Knox.


This was last published in October 2014
