In the data protection battle of erasure coding vs. RAID, the latter appears to be losing ground.
"Right now, with capacities growing the way they are, you could make an argument that RAID is starting to see the end of its usefulness in a lot of environments," said Scott Sinclair, senior analyst at Enterprise Strategy Group, at a Storage Decisions conference in New York. "In massive environments, we're starting to see the limits of what it can and can't do."
Many organizations now routinely store over a petabyte of data.
"Capacity growth is not slowing down," Sinclair said. "So we need a protection model that continues to grow as data grows."
That's where erasure coding comes in. In this method of data protection, data is broken into fragments, expanded and encoded with redundant data pieces, and stored across a set of different locations or storage media.
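The fragment-and-encode idea can be sketched with the simplest possible erasure code: split the data into k fragments and add one XOR parity fragment, so any single lost fragment can be rebuilt from the rest. This is a minimal illustration only -- production systems use Reed-Solomon codes that tolerate multiple simultaneous losses -- and the parameters here are illustrative, not from any particular product.

```python
from functools import reduce

def encode(data: bytes, k: int = 3) -> list[bytes]:
    """Split data into k equal fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)            # ceiling division
    data = data.ljust(k * frag_len, b"\0")   # pad so fragments divide evenly
    frags = [data[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    # Parity is the byte-wise XOR of all data fragments.
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]

def recover(frags: list[bytes], lost: int) -> bytes:
    """Rebuild one lost fragment by XOR-ing all surviving fragments."""
    survivors = [f for i, f in enumerate(frags) if i != lost]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

frags = encode(b"hello world!")        # 3 data fragments + 1 parity fragment
assert recover(frags, 2) == frags[2]   # fragment 2 lost, rebuilt from the rest
```

Because XOR parity protects against only one loss, real deployments trade up to k+m codes (m parity fragments surviving m failures), which is where the extra processor cost comes from.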
In terms of erasure coding vs. RAID, erasure codes are often used instead of traditional RAID because they can reduce the time and overhead required to reconstruct data. But erasure coding is processor-intensive, which increases latency. For high-performance, low-latency workloads -- such as those running on flash storage arrays -- traditional RAID may remain the better choice for some time, according to Sinclair.
Erasure coding can be useful with large quantities of data and any applications or systems that need to tolerate failures, such as disk array systems, object stores and archival storage. Just about every object vendor right now is using erasure coding, Sinclair said.
"Traditional RAID still definitely has a place today in high-performance, smaller environments," he said. "But, honestly, if I'm looking at any sort of capacity, any sort of massive environment, I'm looking at something that can get beyond RAID 6, and something like an erasure-coded solution."
Replication is another method to throw into the mix. It keeps multiple full copies of each object, which provides strong resilience and faster performance and recovery.
"What a number of solutions will do is they'll use replication for various small objects and erasure coding for very large objects," Sinclair said. "Or they may do replication for performance-intensive, small objects and erasure coding for large, less performance-intensive objects."
Watch the erasure coding vs. RAID video above and then read the transcript below to help guide your decision.
Transcript - Erasure coding vs. RAID: An analysis of data protection methods
Editor's note: The following is a transcript of a video clip from Scott Sinclair's presentation at Storage Decisions in New York City. The transcript has been edited for clarity.
Essentially, the idea with RAID -- redundant array of inexpensive disks or redundant array of independent disks -- is to protect against drive failures. With capacities growing the way they are, you could make an argument that RAID is starting to see the end of its usefulness in a lot of environments, but not everywhere. We've been talking about how tapes have been dead for 20 years. So [RAID] will probably exist for quite a while. But in massive environments, we're starting to see the limits of what it can and can't do.
When I deploy my RAID and set my parity [bits], all of them are on drives that are typically right next to each other or within the same enclosure. With erasure coding, you could have some parity bits in London, some in New York, some in San Francisco. So your protection scheme can not only protect you against a drive failure, but against a rack failure and a site failure, and [it can] do this automatically.
So you think about a traditional RAID environment. [As an example,] I deploy my RAID group, I set up a replication policy and I replicate that to London. Now I have two copies and they're fully redundant -- double the capacity. With erasure coding, it depends on the type of codes you use, but it could be set up so you have some of the data in New York, some of it in London, some of it in San Francisco. And if New York goes down, you can rebuild it from the data that's in London and San Francisco.
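The cross-site layout described above can be sketched as a placement map: fragments of a 4+2 code (assumed parameters -- any 4 of the 6 fragments are enough to rebuild) spread round-robin across the three example sites, so each site holds 2 fragments and losing any one site still leaves exactly enough to recover.

```python
sites = ["New York", "London", "San Francisco"]
k, m = 4, 2                                  # 4 data + 2 parity fragments (assumed)
placement = {i: sites[i % len(sites)] for i in range(k + m)}

def survivors(failed_site: str) -> list[int]:
    """Fragment indices still readable after one whole site goes down."""
    return [i for i, site in placement.items() if site != failed_site]

for site in sites:
    left = survivors(site)
    # Each site holds 2 of the 6 fragments, so 4 survive -- exactly k.
    print(f"{site} down -> {len(left)} fragments remain; rebuild OK: {len(left) >= k}")
```

Note the corollary Sinclair raises next: with this layout, rebuilding after a site failure means pulling fragments over the WAN, which is why some deployments choose codes that can also repair a single disk locally.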
One of the challenges of erasure coding is that, if you have a failure on one site, you want to be able to handle it locally. Some organizations have designed their codes, or are intelligent in how they deploy them, so that you can recover from a disk failure using only [the] data on site. Replication can be an alternative. Some [organizations] use a little bit of both.
In addition to the issue of automatically distributed data taking longer to recover across the ocean, one of the other current gotchas with erasure coding vs. RAID is that erasure coding tends to be more processor-intensive and tends to impact performance more. You're deploying a more intelligent code, so you're taking more time, and you have to understand the data in order to move it. So there is a performance penalty to that.
And it typically works a lot better with larger objects. Erasure coding was designed for object storage, where use cases include big media images or medical archiving or seismic data.