Preston: Backing up data, and lots of it, presents a challenge
Date: Jan 16, 2014
In an era of multi-terabyte drives and demand for data to be always available, concerns about how long restores take -- and how much data is involved -- sit at the top of any IT pro's worry list.
In this Storage Decisions presentation, W. Curtis Preston, the founder of Truth in IT and Backup Central, outlines some of the issues with backing up data and data restores as the need grows to store increasing amounts of data.
"The problem with data growth -- this is exponential -- if you could fold a piece of paper in half 42 times, the stack would reach the moon… and that's what happens when you start to have 100% data growth every year," said Preston.
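The paper-folding analogy is straightforward arithmetic: 42 doublings of even a thin sheet overshoot the Earth-Moon distance. A minimal sketch, assuming a standard ~0.1 mm sheet thickness (not a figure from the talk):

```python
# Doubling a ~0.1 mm sheet of paper 42 times -- the same math as
# 100% annual data growth. Sheet thickness is an assumption.
THICKNESS_M = 0.0001            # ~0.1 mm per sheet
MOON_DISTANCE_M = 384_400_000   # average Earth-Moon distance in meters

stack = THICKNESS_M * 2 ** 42   # thickness after 42 folds
print(f"stack: {stack / 1000:,.0f} km")   # about 439,805 km
print(stack > MOON_DISTANCE_M)            # True -- past the moon
```

Forty-two doublings multiply the starting size by about 4.4 trillion, which is why storage that doubles yearly outruns any fixed backup window so quickly.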
While larger storage devices can mean fewer devices that are higher in density, use less power and result in fewer device failures, that doesn't mean problems with backing up data have been eliminated. Preston noted that the undetectable bit error rate in data written to drives hasn't changed.
"Undetectable bit error rate is something that people don't spend a lot of time talking about. But this is the odds that a piece of magnetic media writes a 1 when you told it to write a 0," he said.
Usually, that works out to one error for every 100 TB written to an enterprise SATA [Serial Advanced Technology Attachment] drive, and one for every 10 TB on a consumer-grade drive, he said.
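Those per-terabyte figures follow from typical datasheet error rates. A minimal sketch, assuming the commonly quoted specs of one unrecoverable bit error per 10^15 bits (enterprise) and per 10^14 bits (consumer), which round to the numbers Preston cites:

```python
# Converting a drive's unrecoverable bit error rate (UBER) spec into
# "one error per N terabytes". The UBER values are typical datasheet
# figures (an assumption), not numbers from the talk.
BITS_PER_TB = 8 * 10**12  # 1 TB = 10^12 bytes = 8 * 10^12 bits

for label, uber in [("enterprise SATA", 1e-15), ("consumer-grade", 1e-14)]:
    tb_per_error = 1 / (uber * BITS_PER_TB)
    print(f"{label}: one bad bit per ~{tb_per_error:.1f} TB")
    # enterprise: ~125 TB, consumer: ~12.5 TB
```

The exact conversion gives ~125 TB and ~12.5 TB, consistent with the rounded 100 TB and 10 TB figures in the presentation.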
He also noted that with larger drives, drive rebuild times take longer, which increases overall risk of data loss. That is especially true if the system is storing too much data on one server, too many files in one file system or even [running] too many virtual machines, he said.
"During that time, if it's a RAID 5 array, you're at risk of total data loss. So, RAID 6 gives you some protection, because now at least we have a second drive that can take over if that drive fails," he said, but noted that data loss could still occur.
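The rebuild risk Preston describes can be estimated directly: a RAID 5 rebuild must read every bit on every surviving drive, so the UBER compounds over that whole read. A minimal sketch, assuming an illustrative array of five surviving 4 TB consumer-grade drives (sizes and rate are assumptions, not from the talk):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# during a RAID 5 rebuild, where all surviving data must be read back.
# Drive count, capacity, and UBER are illustrative assumptions.
DRIVE_TB = 4
SURVIVING_DRIVES = 5
UBER = 1e-14                  # consumer-grade: one bad bit per 10^14 bits

bits_read = SURVIVING_DRIVES * DRIVE_TB * 8 * 10**12   # 1.6e14 bits
p_ure = 1 - (1 - UBER) ** bits_read                    # P(at least one URE)
print(f"P(rebuild hits a URE) = {p_ure:.0%}")
```

Under these assumptions the rebuild has roughly an 80% chance of encountering a URE, which is exactly why a second parity drive (RAID 6) matters: one bad read during a RAID 5 rebuild can mean total data loss, while RAID 6 can still recover.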