
Storage Decisions: Diligent, IBM execs talk shop

Diligent CEO Doron Kempel and IBM VP Cindy Grossman talk to SearchStorage about data dedupe VTLs, IBM's 2009 roadmap, and what's in store for backup and data management.

NEW YORK -- SearchStorage sat down at the Storage Decisions conference this week with Diligent Technologies Corp. CEO Doron Kempel and Cindy Grossman, IBM vice president for tape and archive storage. Among the topics of discussion: the latest release of the Diligent ProtecTIER VTL, the roadmap following IBM's acquisition of Diligent, and the next frontier in backup and data management.

So how have things been since the IBM acquisition?

Kempel: Great, we have just released version 2.1 three weeks ago, which is our clustered release of ProtecTIER. The delay to our roadmap as the result of the acquisition was about two-and-a-half months. We had to go through our blue wash, which is making sure the product complies with IBM regulations and procedures, adding the logo, etc. Product development, engineering and QA are all behind us now, and all the kids feel at home.

Our roadmap has not changed. We just came out with our clustered release, and replication is next in 2009. The HyperFactor deduplication engine will also be released with mainframe virtual tape, which will be the first enterprise mainframe virtual tape with deduplication. It will also accept data from open systems, making it the first hybrid VTL with deduplication. That will come in the second half of 2009.


Is the clustering a high-availability pair or is it N-way clustering?

Kempel: The first qualification is for a high-availability pair. The challenge when you dedupe inline is that the index is stored in memory. Clustering requires the two indices, each held in a separate server's memory, to stay in sync, so that if node two sees data that's redundant with what node one has already stored, it doesn't save it again. Each node needs to keep the other aware of what has passed through. It was a two-year process to develop.
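The mechanism Kempel describes can be sketched roughly as follows. This is a hypothetical illustration of a two-node inline-dedupe cluster with synchronized fingerprint indices, not ProtecTIER's actual HyperFactor implementation; the class and method names are invented for the example.

```python
import hashlib

class DedupeNode:
    """Illustrative node in a two-node inline-deduplication cluster.

    Each node keeps an in-memory index mapping chunk fingerprints to
    storage locations. When a node stores a new chunk, it pushes the
    index entry to its peer, so neither node re-stores a chunk the
    other has already seen (a sketch of the syncing Kempel describes).
    """

    def __init__(self, name):
        self.name = name
        self.index = {}   # fingerprint -> location of stored chunk
        self.store = []   # chunks physically written by this node
        self.peer = None  # the other node in the HA pair

    def ingest(self, chunk: bytes):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.index:
            # Duplicate: record a reference instead of storing again.
            return self.index[fp]
        location = (self.name, len(self.store))
        self.store.append(chunk)
        self.index[fp] = location
        if self.peer is not None:
            # Keep the peer's index aware of what passed through here.
            self.peer.index[fp] = location
        return location

node1, node2 = DedupeNode("node1"), DedupeNode("node2")
node1.peer, node2.peer = node2, node1

node1.ingest(b"backup block A")
node2.ingest(b"backup block A")  # redundant: node2 stores nothing new
print(len(node1.store), len(node2.store))  # -> 1 0
```

A real inline-dedupe cluster would sync entries over an interconnect rather than by direct memory writes, and would handle node failure and index rebuild, but the core idea is the same: the index lookup happens before any data is written.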

Any plans for N-way clustering?

Kempel: The customer will have to convince us why they need a third node at the cost of delaying replication support, slated for early 2009.

What else is on the docket?

Kempel: Right now the product is a gateway, and you can choose the disk. We no longer offer software only, except through our existing channel partner HDS [Hitachi Data Systems]. Next year there will be a new appliance made up of IBM servers and disk with a service contract for all the components.

Isn't that what HDS did earlier this year?

Kempel: HDS bundled hardware and software together in high-end and midrange configurations themselves. We only delivered the software.

What about the other VTL product IBM sells from FalconStor? What's going to happen to that?

Grossman: We'll continue to sell that product. It offers back-end tape integration, System i support and compression. It doesn't have data deduplication [IBM does not use FalconStor's deduplication], and we will sell it to smaller clients. We really don't see a conflict between the two products. We'll move in the direction of System i support with ProtecTIER, but cluster, replication and mainframe support are a higher priority. It won't necessarily come in 2009.

Backup has been a hot topic of discussion at this show. It seems users are still struggling with data growth – some in the industry are saying data dedupe will not be enough to keep backups under control, and that data management capabilities need to improve. What's your take on that?

Kempel: Dedupe solves the problem backup creates, which is redundancy. Whether or not you can go back to the point of origin and not back up things twice remains to be seen [excluding source-based dedupe products]. Once you address a problem in IT reasonably well, the focus shifts elsewhere. Most problems are not resolved to the point of a 100% cure. New projects and companies could emerge that solve the problem completely, but deduplication and the declining cost of disk make it a reasonable solution.

Applications do need to become smarter to refine backup, but I question whether that's still funded, or whether virtual tape is good enough, as it has been in the mainframe world, and the effort has shifted to archive. Backup has probably reached the point where you can't justify significantly more funds [for data management development].

