
Storage Decisions: Diligent, IBM execs talk shop

Beth Pariseau

NEW YORK -- SearchStorage.com sat down at the Storage Decisions conference this week with Diligent Technologies Corp. CEO Doron Kempel and Cindy Grossman, IBM vice president for tape and archive storage. Among the topics of discussion: the latest release for the Diligent ProtecTIER VTL, the roadmap following IBM's acquisition of Diligent, and the next frontier in backup and data management. 

So how have things been since the IBM acquisition?

Kempel: Great. We released version 2.1, our clustered release of ProtecTIER, three weeks ago. The delay to our roadmap as a result of the acquisition was about two-and-a-half months. We had to go through our blue wash, which means making sure the product complies with IBM regulations and procedures, adding the logo, etc. Product development, engineering and QA are all behind us now, and all the kids feel at home.

Our roadmap has not changed. We just came out with our clustered release, and replication is next in 2009. The HyperFactor deduplication engine will also be released with mainframe virtual tape, which will be the first enterprise mainframe virtual tape with deduplication. It will also accept data from open systems, making it the first hybrid VTL with deduplication. That will come in the second half of 2009.


Is the clustering a high-availability pair or is it N-way clustering?

Kempel: The first qualification is for a high-availability pair. The challenge when you dedupe inline is that the index is stored in memory. In a cluster, the two indices must be kept in sync across the separate servers' memories, so that if node two sees data that is redundant with what node one has stored, it doesn't save it again. Each node needs to keep the other aware of what has passed through it. It was a two-year process to develop.
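[Editor's note: For readers unfamiliar with the mechanics Kempel describes, the Python sketch below illustrates the general idea of a synchronized in-memory fingerprint index across a high-availability pair. The class and the hash-based matching are hypothetical illustration only, not ProtecTIER's HyperFactor implementation.]

import hashlib

class DedupeNode:
    """Toy model of one node in a two-node inline-dedupe cluster."""

    def __init__(self, name):
        self.name = name
        self.index = set()   # in-memory index of chunk fingerprints
        self.peer = None     # the other node in the HA pair
        self.store = []      # stand-in for the back-end disk

    def pair_with(self, other):
        self.peer, other.peer = other, self

    def ingest(self, chunk):
        # Fingerprint the chunk; skip the write if either node has
        # already stored it anywhere in the cluster.
        fp = hashlib.sha1(chunk).hexdigest()
        if fp in self.index:
            return "duplicate"
        self.index.add(fp)
        self.store.append(chunk)   # write the new data once
        self.peer.index.add(fp)    # keep the peer's index in sync
        return "stored"

# Redundant data arriving at either node is saved only once.
node1, node2 = DedupeNode("node1"), DedupeNode("node2")
node1.pair_with(node2)
print(node1.ingest(b"backup block"))  # stored
print(node2.ingest(b"backup block"))  # duplicate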

Any plans for N-way clustering?

Kempel: The customer will have to convince us why they need a third node at the cost of delaying replication support, slated for early 2009.

What else is on the docket?

Kempel: Right now the product is a gateway, and you can choose the disk. We no longer offer a software-only version, except through our existing channel partner HDS [Hitachi Data Systems]. Next year there will be a new appliance made up of IBM servers and disk, with a service contract for all the components.

Isn't that what HDS did earlier this year?

Kempel: HDS bundled hardware and software together in high-end and midrange configurations themselves. We only delivered the software.

What about the other VTL product IBM sells from FalconStor? What's going to happen to that?

Grossman: We'll continue to sell that product. It offers back-end tape integration, System i support and compression. It doesn't have data deduplication [IBM does not use FalconStor's deduplication], and we will sell it to smaller clients. We really don't see a conflict between the two products. We'll move in the direction of System i support with ProtecTIER, but clustering, replication and mainframe support are higher priorities, so it won't necessarily come in 2009.

Backup has been a hot topic of discussion at this show. It seems users are still struggling with data growth; some in the industry are saying data dedupe will not be enough to keep backups under control, and that data management capabilities need to improve. What's your take on that?

Kempel: Dedupe solves the problem backup creates, which is redundancy. Whether you can go back to the point of origin and not back up things twice remains to be seen [excluding source-based dedupe products]. Once you address a problem in IT reasonably well, the focus shifts elsewhere. Most problems are not resolved to the point of a 100% cure. New projects and companies could emerge that solve the problem completely, but deduplication and the declining cost of disk make it a reasonable solution.

Applications do need to become smarter to refine backup, but I question whether that's still funded, or whether virtual tape is good enough, as it has been in the mainframe world, and the effort has shifted to archive. Backup has probably reached the point where you can't justify significantly more funds [for data management development].
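[Editor's note: As a back-of-the-envelope illustration of the redundancy Kempel refers to, the Python sketch below shows how far repeated full backups of mostly unchanged data deduplicate. The fixed-size chunking and hash-identity matching are simplifying assumptions, not any vendor's actual engine.]

import hashlib, os

def dedupe_ratio(backups, chunk_size=4096):
    """Ratio of nominal data to unique chunks across backup images."""
    seen, total = set(), 0
    for image in backups:
        for i in range(0, len(image), chunk_size):
            total += 1
            seen.add(hashlib.sha1(image[i:i + chunk_size]).digest())
    return total / len(seen)

# Three nightly "full" backups in which only ~5% of the data changes:
base = os.urandom(4096 * 100)
nightlies = [os.urandom(4096 * 5) + base[4096 * 5:] for _ in range(3)]
print(f"dedupe ratio: {dedupe_ratio(nightlies):.1f}:1")  # about 2.7:1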
