Same thing with the assumption of tape failure. If one in 100 tapes fails, then every tape you use has a 99% chance of not failing.
Here's the general math argument a lot of people want to use (we hear it often, but that doesn't make it a good argument). The bad argument goes like this: a certain percentage of tapes is guaranteed to go bad (the risk) per year, and based on the number of tapes in use (N tapes per full backup multiplied by X tape sets, a quantity we'll call the "group"), the more tapes in the backup set, the higher the risk. In other words:
Risk (1%) * Group (12 tapes) = 12% Risk
That's not the case, because the odds apply to each individual tape. If one out of every 100 tapes goes bad (a 1% risk) and you are using 12 tapes, each tape faces the same one-in-100 risk; the risk is not multiplied across the group.
What you do get with multiple tapes is more chances for a failure to occur. It's like flipping a coin 12 times: each flip has a one-in-100 chance of coming up "failure." Flip once (one tape) and you have one chance of failure; flip 12 times (12 tapes) and you have 12 chances, each at one in 100. Strictly speaking, the chance that at least one of the 12 tapes fails is 1 - (0.99)^12, or about 11.4%: close to, but not the same as, the naive 12% figure.
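The arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming each tape fails independently with the same probability; the function name is hypothetical, not from any backup tool.

```python
def prob_at_least_one_failure(per_tape_risk, num_tapes):
    """Chance that at least one of num_tapes independent tapes fails.

    The complement of "at least one fails" is "every tape survives",
    which has probability (1 - per_tape_risk) ** num_tapes.
    """
    return 1 - (1 - per_tape_risk) ** num_tapes

# One tape at a 1-in-100 risk: 1% chance of failure.
print(round(prob_at_least_one_failure(0.01, 1), 3))   # → 0.01

# Twelve tapes: roughly 11.4%, not the 1% * 12 = 12% the bad argument claims.
print(round(prob_at_least_one_failure(0.01, 12), 3))  # → 0.114
```

Note the difference grows with the group size: the naive multiplication would exceed 100% at 101 tapes, while the independent-failure model can never reach it.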
Read Pierre Dorion's answer to this question.