Backup is no longer simply copying data from production storage to protection storage. Today, the bleeding edge of backup is about meeting the demands of the "always-on" data center. For these shops, backup and recovery windows are a thing of the past. Data must be protected in real time and recoveries must be nearly instantaneous.
Also, many organizations are pushing to keep more data for longer periods of time -- in some cases, forever. Just a few years ago, meeting these types of demands seemed impossible, but now, data protection technologies and backup applications are emerging to address the challenges.
Backups in real time
"It's not about backup, it's about recovery" is a phrase you'll hear spouted by the marketers at backup software vendors. But the reality is that if a valid backup copy is not made, recovery is impossible. This is not a chicken-and-egg discussion; backup must come first. In the modern data center, those backups must happen more frequently, so that if a disaster occurs, only a small amount of data needs to be restored. Vendors have presented multiple ways to accomplish this mission.
Legacy backup products copied all or most of the data when a protection event was scheduled. Even a so-called "incremental" backup would copy far more data than was actually changed. This is because the backup had no granularity below the file itself.
But today's backup software has the ability to understand data at a much more granular level, typically at the block level. This means that when a database is protected, only the individual blocks that were changed since the previous backup need to be transferred.
Known as "block-level incremental" backup, this process significantly reduces the amount of data transferred to protection storage. It also means faster backups.
While block-level incremental backup has been available for years, it is just now becoming a trusted feature. This is because application vendors like Oracle and operating environment vendors like VMware are providing API sets that allow block-level incremental backup of applications or virtual machines. Using this technology, backups can be performed as frequently as every 15 minutes.
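In practice, block-level incremental backup relies on changed-block-tracking APIs (such as VMware's) rather than rescanning data. The Python sketch below is purely illustrative: it simulates the core idea by hashing fixed-size blocks and transferring only those whose hashes differ from the previous backup. The block size and function names are assumptions for this example, not any vendor's implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block granularity


def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of an image or database file."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(previous: list, current_data: bytes) -> dict:
    """Return only the blocks whose hashes differ from the prior backup."""
    current = block_hashes(current_data)
    delta = {}
    for idx, digest in enumerate(current):
        if idx >= len(previous) or previous[idx] != digest:
            delta[idx] = current_data[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
    return delta


# A full backup hashes every block; the next run transfers only the delta.
baseline = bytes(8192)                  # two zeroed 4 KB blocks
prior = block_hashes(baseline)
modified = bytes(4096) + b"x" * 4096    # only the second block changed
delta = changed_blocks(prior, modified)
print(sorted(delta))                    # only block 1 needs to move
```

Because only one 4 KB block is transferred instead of the whole file, a protection event can complete fast enough to run every 15 minutes.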
Sub-15-minute backup windows
Data protection vendors are not stopping at a 15-minute protection window, either, and many are integrating replication into their applications. Replication allows for an almost real-time capture of production data, typically stored on a high-performance (and high-cost) storage system, rather than protection storage. Also, only the most recent copy of data is typically stored. But today, backup and replication have been integrated and backup software can now back up replicated copies automatically. This means that a single process, managed via backup software, can provide a near-instant recovery by pointing the application to the secondary storage system.
Today's backup applications are also becoming more closely aligned with production storage. Some can schedule and execute production storage snapshots directly from the backup application. This allows a single application to be the hub for all protection activities. Also, some applications can include these snapshots in their search results. For example, if a user requests file "xyz.doc," the search now can include both snapshot and backup versions, and results are ranked based on the copy that is fastest to restore.
Snapshot and replication technologies can also be used together with effective results. For example, a backup application can be used to trigger a snapshot, replicate locally (to a second system) and to a disaster recovery (DR) site (to a third system). The result is full protection and rapid recovery both from a local server failure or an entire data center -- controlled from a single application.
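The snapshot-plus-replication workflow above can be sketched as a single orchestration routine. Real products call storage-array snapshot and replication APIs; in this hedged, runnable illustration a directory copy stands in for each step, and all function and path names are hypothetical.

```python
import shutil
import tempfile
from pathlib import Path


def protect(volume: Path, local_target: Path, dr_target: Path) -> None:
    """One protection event driven from a single application:
    take a snapshot, then fan it out to a local second system
    and to a third system at the DR site.
    """
    snapshot = volume.with_name(volume.name + ".snap")
    shutil.copytree(volume, snapshot)        # 1. point-in-time snapshot
    shutil.copytree(snapshot, local_target)  # 2. replicate locally
    shutil.copytree(snapshot, dr_target)     # 3. replicate to the DR site


# Demo: one file protected in three places by one call.
root = Path(tempfile.mkdtemp())
vol = root / "vol"
vol.mkdir()
(vol / "xyz.doc").write_text("data")
protect(vol, root / "local", root / "dr")
print((root / "dr" / "xyz.doc").exists())
```

A local server failure recovers from the second system; a full site loss recovers from the third, all managed by the one controlling application.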
No backup server backups
In some cases, integration with production storage eliminates the backup server altogether. Applications like Oracle and MS-SQL can be backed up directly to protection storage, or the primary storage system can back up directly to the backup device via a private path.
In this model, the backup process is often delegated to application owners. This provides greater scale (from a human perspective) and better protection, since the application owner probably knows best what data should be protected and how often. It also allows the backup administrator to focus on protecting other parts of the data center like user endpoints and home directories.
The direct transfer from primary storage to backup storage via a private path eliminates not only the backup server, but also the backup network, because the transfer is direct from the application to the backup appliance. This creates a very high performance transfer. Vendors in this space are developing application integration so that applications like Exchange, MS-SQL and Oracle can be safely protected.
The protection of endpoints has also evolved over the last few years. Many providers can now protect and recover a wide variety of devices, beyond just laptops. For example, a user can recover a file from their laptop onto their tablet for editing or sharing, eliminating, to some extent, the need for a file sync-and-share product.
Also, many endpoint backup products have added the ability to perform remote wipe operations. This makes them a good complement to the bring-your-own-device (BYOD) initiatives popular within some companies. If a laptop or tablet is lost or stolen, the corporate data on that device can be erased the moment it makes a connection to the internet.
While using replication and recovering on a secondary storage system meets a very strict recovery point and recovery time objective (RPO/RTO), it's also an expensive option. Another alternative is to use recovery-in-place. Data protection software that provides this capability uses block-level incremental backup and virtualization to instantiate a virtual server directly on protection storage. Block-level incremental backup events do not typically occur as often as replication events, so the RPO/RTO window is a bit longer than with replication.
Instead of recovering in less than 15 minutes, these technologies typically meet a sub-1 hour RPO/RTO, which is adequate for the majority of the applications in the environment. By accepting that additional recovery time, the organization saves money. It can use the protection storage that it already has in place without needing to buy a production-quality secondary system.
The challenge with this approach is the performance of the recovered application while its data is being hosted by the backup appliance. In most cases, the appliance will not be able to provide the same level of performance as the primary storage system. However, many appliances should be able to provide adequate performance. It is important to understand what type of performance the appliance will deliver.
Backup applications are also evolving to provide many functions that used to be found only in traditional archive applications. Backup applications now include highly scalable databases that can track data for years, as well as full and fast search functionality. As long as the archive use case does not also require compliance-grade lock-down capabilities, these applications may be suitable for many small to medium-sized businesses.
Many of these applications have an enhanced search functionality that allows for context-level searching. In other words, the data within files can be indexed and searched in a "Google-like" fashion.
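Under the hood, "Google-like" search is typically built on an inverted index: a map from each word to the files that contain it. This minimal sketch shows the idea; the file names and contents are invented for illustration, and a real product would add stemming, ranking and incremental updates.

```python
import re
from collections import defaultdict


def build_index(files: dict) -> dict:
    """Map each word to the set of files whose contents contain it."""
    index = defaultdict(set)
    for name, text in files.items():
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(name)
    return index


# Illustrative corpus: content-level search finds files by words inside them.
files = {
    "xyz.doc": "quarterly revenue forecast",
    "abc.doc": "revenue by region",
}
index = build_index(files)
print(sorted(index["revenue"]))  # ['abc.doc', 'xyz.doc']
```

A query for a word returns every file containing it, not just files whose names match, which is what distinguishes context-level search from simple file-name lookup.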
Bleeding-edge protection storage
The combined capabilities of block-level backup and in-place recovery will also drive advancements in the data protection hardware itself. These systems have to evolve from backup appliances into protection storage: hardware that can simultaneously ingest data from a variety of sources while providing the performance and reliability to occasionally host a production application when recovery in place is used.
Over the next few years, expect these systems to integrate flash, as has happened in production storage. Initially, the flash will be used to store metadata so that search and deduplication indexes can scale while being more responsive. But eventually, the SSD area will be used as the target for in-place recoveries to ensure adequate application performance.
Several protection storage systems have also expanded into the archive market and can consolidate backup and archive data on a single system. This is being driven partly by backup applications that offer archive functionality, but also by new archive applications designed to use disk as an archive target. These storage systems are expanding even further, supporting data replication and even replacing basic NAS functions like user home directories. They typically leverage a scale-out architecture so they can cost-effectively meet the performance and capacity demands of these new tasks.
Scale-out architectures are also a gateway to hyper-converged secondary storage that not only stores secondary data but also hosts the data protection and replication applications themselves. This capability should greatly reduce costs, simplify management and increase performance.
Backup has been a popular cloud storage use case. This typically entails an on-site backup appliance that replicates data to a cloud provider. These appliances can also be used as stand-in servers to provide a similar recovery-in-place experience.
Cloud backup is evolving in two areas. First, backup performance is improving. Second, internet bandwidth is becoming very cost-effective: most businesses can now easily afford a 1 Gbps internet connection, and faster speeds are well within reach. As a result, there is less need to keep an appliance on-site to store this data.
Some vendors are offering a direct-to-cloud option that uses no appliance at all, which simplifies implementation and keeps costs down. Others use an extender/cache approach rather than a full appliance. This extender or cache stores only a small subset of the data on-site, essentially acting as a cache to the cloud. These vendors also allow a mission-critical subset of data to be permanently stored on the extender and in the cloud, while less mission-critical data is stored only in the cloud. The mission-critical subset is then available for fast local recoveries, and the less mission-critical data is available for recovery via the cloud. For the most part, data centers should look to keep large critical files like databases on the extender and small files in the cloud, where the latency of the cloud won't severely impact recovery performance.
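The extender/cache placement decision described above boils down to a simple tiering policy. This sketch is a hypothetical illustration of such a policy, not any vendor's logic; the field names and capacity thresholds are assumptions.

```python
def place(item: dict, cache_budget_gb: float, cache_used_gb: float) -> str:
    """Decide where a backup item lives under the extender/cache model.

    Mission-critical data that fits the local budget is pinned to both
    tiers for fast local recovery; everything else lives cloud-only and
    is recovered over the WAN when needed.
    """
    fits = cache_used_gb + item["size_gb"] <= cache_budget_gb
    if item["mission_critical"] and fits:
        return "extender+cloud"
    return "cloud-only"


# A large critical database stays local; a small routine file goes cloud-only.
print(place({"mission_critical": True, "size_gb": 200}, 500, 100))
print(place({"mission_critical": False, "size_gb": 1}, 500, 100))
```

Keeping the large, latency-sensitive items on the extender is what makes local recovery fast while still letting the cloud hold the bulk of the data.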
Another area of evolution is recovery via disaster recovery as a service (DRaaS). While no longer bleeding edge, it is becoming more trusted. This capability allows a provider to eliminate one of the major concerns about cloud-based protection: the time it takes to pull all the data out of the cloud. Instead, an organization's servers are recovered fully in the cloud within less than an hour.
Bleeding-edge backup applications do far more than copy data from point A to point B. Many of them look more like a data protection suite that can meet all but the strictest of RPO/RTOs and provide management over a primary storage system's own data protection capabilities. The value of this suite-like approach is that backup, DR and even archive can be managed from a single pane of glass, which should lower administration time and hardware acquisition costs.
BIO: George Crump is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.