There are also concerns about data loss due to hash collisions. How have manufacturers addressed that?

I've had a chance to speak to a number of vendors about this, and for the most part they say the problem is essentially resolved by the newest hashing algorithms. I spoke with Permabit on the subject, and it said the only way to be absolutely sure each chunk of data is unique is to compare every new chunk against every chunk already stored. It also said that isn't practical, because all those comparisons create a huge time delay. So the compromise is to use a hashing algorithm.
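To make that tradeoff concrete, here is a minimal sketch in Python of a hash-indexed chunk store. It is not any vendor's implementation; the ChunkStore class and its verify flag are invented for illustration. With verify off, the store trusts the hash alone; with verify on, it pays for a full byte comparison whenever the hash reports a duplicate, which is one common way to hedge against collisions without the exhaustive everything-against-everything comparison Permabit described.

    import hashlib

    class ChunkStore:
        """Toy content-addressed store: chunks are indexed by their SHA-256 hash."""

        def __init__(self, verify=False):
            self.chunks = {}      # hash digest -> chunk bytes
            self.verify = verify  # byte-compare on hash match to rule out collisions

        def put(self, chunk: bytes) -> str:
            digest = hashlib.sha256(chunk).hexdigest()
            existing = self.chunks.get(digest)
            if existing is not None:
                # The hash says "duplicate". With verify=True we pay the extra
                # cost of a full byte comparison; with verify=False we trust the
                # hash alone, accepting the astronomically small collision risk.
                if self.verify and existing != chunk:
                    raise RuntimeError("hash collision detected")
                return digest  # duplicate: store nothing new
            self.chunks[digest] = chunk
            return digest

    store = ChunkStore(verify=True)
    a = store.put(b"block of backup data")
    b = store.put(b"block of backup data")  # deduplicated: stored only once
    assert a == b and len(store.chunks) == 1

The verify check only runs on a hash match, so its cost scales with the number of duplicates found rather than with the total number of chunks stored.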

Other companies take a different approach. ExaGrid, for example, cuts data into large segments, analyzes the contents of each segment to see how they relate to one another, and then performs byte-level differencing on each segment and stores the data that way.
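As a rough illustration of byte-level differencing, the sketch below stores only the byte ranges where a new segment differs from a similar reference segment. It uses Python's difflib to find the matching runs; the helper names byte_delta and apply_delta are hypothetical, and this is not ExaGrid's actual algorithm.

    import difflib

    def byte_delta(reference: bytes, segment: bytes):
        """Return only the byte ranges of `segment` that differ from `reference`."""
        ops = difflib.SequenceMatcher(None, reference, segment).get_opcodes()
        # Keep just the non-matching spans; matching spans come from the reference.
        return [(tag, i1, i2, segment[j1:j2])
                for tag, i1, i2, j1, j2 in ops if tag != "equal"]

    def apply_delta(reference: bytes, delta) -> bytes:
        """Rebuild the segment from the reference plus the stored differences."""
        out, pos = [], 0
        for tag, i1, i2, data in delta:
            out.append(reference[pos:i1])  # copy unchanged bytes from reference
            out.append(data)               # splice in the differing bytes
            pos = i2
        out.append(reference[pos:])
        return b"".join(out)

    ref = b"nightly backup of the sales database, version 1"
    new = b"nightly backup of the sales database, version 2"
    delta = byte_delta(ref, new)
    assert apply_delta(ref, delta) == new  # only the changed bytes were stored

For segments that are mostly alike, the delta is tiny compared with the segments themselves, which is the payoff of the differencing approach.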

Check out the entire Data Deduplication FAQ.

This was first published in December 2007
