Data Domain Inc. is bringing out its biggest and fastest data deduplication device, while EMC is also set to launch its first three virtual tape libraries (VTLs) with data deduplication.
Data Domain today rolled out the DD690, a quad-core system that it claims can deliver up to 1.4 TB per hour aggregate throughput and holds 35.3 TB of usable data.
"This is the next generation of our product line," said Beth White, Data Domain vice president of marketing. "Last year we moved to dual-core processors, and this year we're following that CPU-centric trend in how we scale our systems by moving to quad core."
EMC Corp. isn't making anything public, but sources said it will unveil an enterprise and two midrange VTLs using data deduplication software from Quantum Corp. -- probably next week at its annual EMC World conference in Las Vegas.
The new deduplication products come as major storage vendors rally around the hot data deduplication space. IBM purchased Diligent Technologies for a reported $200 million last month, and it's no secret that EMC has licensed Quantum's data deduplication technology for its disk libraries.
EMC's new dedupe family
According to sources and EMC documents, EMC's new dedupe VTLs are the EMC DL3D 1500, DL3D 3000 and DL3D 4000. The DL3D 1500 and DL3D 3000 are midrange libraries with NAS or VTL interfaces and compete with Data Domain, as well as Quantum's DXi disk-backup platform. Data Domain also offers NAS and VTL options, while Quantum offers both interfaces, as well as iSCSI.
The DL3D 4000 is an enterprise VTL that EMC will position as a competitor to VTLs from IBM/Diligent and Sepaton.
The DL3D 1500 model scales to 36 TB of usable capacity with throughput of up to 720 GB per hour and includes six Gigabit Ethernet ports and two SAN ports. The DL3D 3000 scales to 148 TB of usable capacity, performs at 1.4 TB per hour and has six Gigabit Ethernet and four Fibre Channel ports. Both midrange models will ship standard with CIFS/NFS interfaces and offer VTL as an option.
The DL3D 4000 has an ingestion rate of 8 TB per hour, dedupes and replicates at 1.4 TB per hour, and holds up to 148 TB of usable capacity for data deduplication out of a total of 822 TB. It is available with up to eight Fibre Channel ports and is VTL only.
EMC hasn't confirmed the new VTLs, but its customers, competitors and other industry sources said EMC sales representatives have already been pitching the products. "I have the paperwork on my desk," said one EMC storage customer evaluating dedupe devices from EMC and Data Domain. "I haven't signed off yet, but I expect to make a decision very soon."
Data Domain looks to press advantage
Data Domain, which claims more than 1,800 customers, continues to upgrade its systems in the face of increasing competition. In addition to delivering more throughput and larger capacity than its previous system, Data Domain's DD690 includes the option of 10-Gigabit Ethernet connectivity and supports replication fan-in from 60 Data Domain DD120 branch office devices.
Data Domain bills the DD690 as a "long-term online retention" system that fits its strategy of moving beyond backup to nearline storage. The DD690 is also available as a gateway option and will be generally available next month. Pricing begins at $210,000 for 16 TB.
Brian Biles, Data Domain vice president of product marketing, said his company's systems have an advantage over data deduplication newcomers because Data Domain built the technology into its systems from the start.
"People are taking existing products and trying to add dedupe to them, and they have weaknesses," Biles said.
Data Domain's biggest perceived weakness is that it can't deduplicate across storage nodes, limiting its scalability. A fully configured DD690 scales up to 568 TB, but that requires 16 controllers. Biles said Data Domain will support clustering across nodes next year. IDC analyst Noemi Greyzdorf said lack of clustering doesn't hurt Data Domain, yet.
"From a marketing and competitive standpoint it's a weakness, but from a deployment and architecture perspective, it's not so much," Greyzdorf said. "The only problem is when you have a single volume bigger than the system can handle, you can't stripe it across multiple systems. Volumes that size aren't that common yet, though."