Sun has launched yet another set of countersuits against NetApp, this time in California. “Sun was legally obligated to respond in Texas to the initial suit brought on September 5, 2007 by Network Appliance to forestall competition from the free ZFS technology,” Sun said in a statement emailed to press this week. The statement continued:
Today we filed additional counterclaims in California, and specifically under the Lanham Act and California Business and Professions Code, based on Network Appliance’s false statements to the public about the alleged use of Network Appliance patents in ZFS. In parallel, we will be bringing a motion before the court in California asking that the case filed in Texas be consolidated with the case filed today for trial in the Bay Area, headquarters to both Sun and Network Appliance. Today’s filing includes counterclaims against the entirety of Network Appliance’s product line, including the entire NetApp Enterprise Fabric Attached Storage (FAS) products, V-series products using Data ONTAP software, and NearStore products, seeking both injunction and monetary damages.
Since Sun was forced to litigate, we feel California is a more appropriate venue to do so for several reasons. First, Sun and Network Appliance are both headquartered in Northern California, within 10 miles of each other. Second, most discovery will take place in California, as many of the key inventors on the patents and primary counsel for both parties are based in California. From both a judicial and economic standpoint, it makes much more sense for the case to be in California.
Sun has accused NetApp of “venue shopping” by choosing the Eastern District of Texas. A Sun statement responding to NetApp’s original suit called it “a legal jurisdiction (East Texas) long favored by ‘patent trolls.’” The choice of district has been a source of head-scratching even among people who are still reserving judgment, given that the two companies are both located in California, as Sun’s statement points out.
Who knows what the truth is. It could also be that, since the district has a history of patent litigation, NetApp felt that court would be better able to discern the truth out of the he-said, she-said than a less experienced court in California.
But the longer this goes on (and boy, has it gone on), the more I start to think that even with the technical background I’ve picked up and the familiarity I have with both companies from covering storage for years, I’m not sure I would be able to sort out who’s right here. If this ever gets to trial, I do not envy the judge or potential jurors. Not one bit.
Of all the storage companies competing to sell data deduplication, Quantum is unique. That’s because it is primarily a tape vendor and data deduplication was developed to replace tape.
Look at some of the other vendors involved in what Data Domain CEO Frank Slootman calls a “land grab” for deduplication customers. Data Domain, Sepaton, Diligent Technologies and FalconStor sell virtual tape libraries (VTLs), EMC and Network Appliance sell massive disk arrays, and Riverbed sells WAN optimization. Their sales forces wouldn’t know LTO-4 tape from masking tape.
Then there is Quantum, which after gobbling up rival ADIC last year will sell close to $1 billion worth of tape products this year. Quantum CEO Rick Belluzzo isn’t buying into the “tape is dead” line you hear from most de-duplication vendors.
“Tape will continue to have an important role,” he said. “Very few customers are looking to go tapeless.”
Quantum won’t be the only tape vendor selling deduplication devices for long. Overland Storage will come out with its deduplication appliance soon. Still, most deduplication vendors disagree with Belluzzo about the long-term future of tape. Sepaton spelled backward is “no tapes,” and the company was built on the premise that tape is going away. So was Data Domain, and Slootman says when Data Domain sells appliances, “We replace tape in almost every instance.”
Belluzzo said that’s because Data Domain sells to remote office and midsized companies. Quantum’s strategy is to push into the enterprise, with the DXi7500 enterprise system coming in a few months to go with Quantum tape libraries. He says it doesn’t have to be one or the other in large shops.
“I hear our competitors say, ‘It’s clear that tape is dead.’ That has no credibility with customers,” Belluzzo said. “We still sell tape. We see tape replacement along the edge, where they collect data and replicate it to the data center. But tape plays a critical role in centralized data centers and consolidated SAN backup schemes. The whole story is, in midsized and enterprise data centers, people are buying disk and tape together.”
Quantum claims 120 customers for its DXi de-duplication appliances over the last six months. Market leader Data Domain has about 400 over that same period.
Another area where Quantum does a balancing act is with its de-duplication patent. With deduplication’s popularity rising and other vendors looking to get into the act, Quantum could license its technology and let others sell it. Data Domain paid a $5.4 million royalty for the patent earlier this year. And Quantum is suing Riverbed for patent infringement.
Belluzzo said Quantum is a product company, so licensing its technology takes a back seat. He won’t rule it out, though.
“It’s always a balance you face: do you hold onto it and let the market work around you, or do you exploit it for commercial purposes and let the market come to you?” he said. “We’re trying to balance that now.”
I have yet to get a letter from an institution with which I do business that starts like this:
Dear Current or Former PEIA, WVCHIP, or AccessWV Member:
We are writing to you because of a recent data security incident. On October 16, 2007, a mainframe computer tape containing your and your dependents’ name, address, and social security number was reported as lost by United Parcel Service (UPS) while en route to PEIA’s data analyst.
But the longer I stay on the storage beat, the more I feel like the day is coming.
Hitachi GST is back at it again this week with another update to its disk drives, this time with a redesign of its desktop SATA and PATA drives for power efficiency. Hitachi claims the updates to its silicon on the new Deskstar P7K500 drive can reduce the drive’s power consumption by up to 40 percent–or down to as low as 6 watts when active and 2 to 3 watts while idle.
The new specs were achieved in a couple of different ways: a new system-on-a-chip design for the drive’s power modules, and a change in each drive’s power regulator from a linear architecture to a switched one. The moves were made with an eye toward the new Energy Star 4.0 spec for PCs released in July, which allots a “budget” of 50 watts for the whole system in idle mode, of which an estimated 8.3 watts go to the disk drive. With the new 250 GB version of the Deskstar, Hitachi is claiming a draw of 3.6 watts in idle mode, and 4.8 watts for the 320, 400 and 500 GB models.
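To make the arithmetic behind those figures concrete, here is a minimal sketch comparing Hitachi’s claimed idle draws against the drive’s estimated share of the Energy Star 4.0 budget. The 50 W and 8.3 W figures come from the spec estimates cited above; the function name is my own.

```python
# Back-of-envelope check on the Energy Star 4.0 idle-power numbers.
SYSTEM_IDLE_BUDGET_W = 50.0   # Energy Star 4.0 idle budget for the whole PC
DRIVE_SHARE_W = 8.3           # estimated share allotted to the disk drive

def headroom_freed(drive_idle_w: float) -> float:
    """Watts of the drive's budget share left over for other components."""
    return DRIVE_SHARE_W - drive_idle_w

# Hitachi's claimed idle draws for the new Deskstar P7K500 models:
print(f"250 GB model frees {headroom_freed(3.6):.1f} W")  # 4.7 W
print(f"Larger models free {headroom_freed(4.8):.1f} W")  # 3.5 W
```

That freed-up wattage is the “wiggle room” Hitachi’s Lee Johnson describes below.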
These savings won’t necessarily make a dent in anyone’s home electric bill, according to Lee Johnson, 3.5-inch product marketing manager for Hitachi. “But with the additional watts left over, PC makers can use that added wiggle room to design PCs with more RAM, more features on the motherboard, or a higher processor clock speed,” she said.
Hitachi plans to add similar power-savings technology to its enterprise-class drives, but IDC’s John Rydning says that may not necessarily be practical–nor lead to significant cost savings in enterprise disk systems.
“At the enterprise level there’s not a lot of impact on the overall system by reducing idle drive power draws,” he said, noting that turning drives completely off through MAID is the way the enterprise is headed. “But if you’re a large enterprise organization with hundreds or thousands of PC workstations, this might make a difference.”
Last week, I met with more vendors and was briefed on more new technologies than I thought possible in a 3-day period at Storage Networking World (SNW) in Dallas, TX. However, now that I am back in the comforts of Omaha, NE, (if one can ever call Omaha comfortable), here are some of the briefings and interviews that I found to be the most interesting. And some that I thought were totally unremarkable.
Sun’s director of storage marketing, Dave Kenyon, and I met under the pretense of doing an interview for an upcoming article for Storage magazine on VTLs that manage disk and tape. But, whatever Dave was on during our interview, I need to get me some of that. I’m guessing Dave was up all night with the SNW crowd and his coffee was just kicking in when we sat down for our 9:00 am interview on Wednesday morning, because he let it rip. From blasting how backup software manages disk to wondering aloud why open systems vendors and users fail to learn the same lessons that the mainframe folks learned years ago, Dave solved backup’s problems (and most of the world’s) in the 30 minutes we met.
I also met with Isilon Systems’ director of marketing, Brett Goodwin. In the last year, Isilon Systems has gone from Wall Street darling and supposed NetApp-killer to a stock price collapse and whispers on the street that their product was having problems.
Brett explained that Isilon Systems had initially set earnings expectations too high, and when the company then failed to meet even the lowered expectations, it was promptly punished by Wall Street. As for the rumors about the IQ product not working well, he said it was more a matter of Isilon’s VARs selling into accounts they had little or no business selling into. Isilon Systems’ IQ series operates best with video streaming applications, not in most business environments, where random file access is the norm.
On the other end of the spectrum, I had a most unremarkable briefing with SeaNodes. SeaNodes provides clustered software that pools unused capacity on the internal hard drives of Linux servers and shares it among those servers. Now, I thought this idea was dumb five years ago, when a company named Monosphere attempted something similar for Windows servers. Monosphere has since seen the light and moved on to more intelligent pursuits, so I was dumbfounded that another company would try the same thing.
In SeaNodes’ defense, at least they are shooting for the clustered, high-performance Linux server market, where servers use 500 and 750 GB internal drives and the aggregate of excess internal capacity probably reaches the hundreds of terabytes. However, users should only look at this technology if they are as geeky as the people who run clustered server computing farms and would rather be salvaging a few terabytes of storage than figuring out how to squeeze in a trip to the golf course before the first snow of the season flies.
Not sure if I mentioned this before, but I’m a geek. I like blinking lights and shiny things. I do math and physics for fun. I’d choose a good computer magazine over Maxim. . .well, maybe not THAT much of a geek, but you get the point.
So what’s provoked my geekitude this time? SAS benchmarks!
My friend Karl and I go back and forth about SAS disk benchmarks. I follow him in his quest to get past the 200MBps ceiling on his desktop. I poke fun at his pursuit while secretly hoping he’ll find that right combo to break the 200MBps mark so I can buy it.
Further fueling my mental yoga over disks is the fact that SAS has invaded our server room at work like a plague. A good plague, but a plague all the same. I went to work one day and realized we don’t use SCSI in anything but our legacy machines. Honestly, I love it: the performance of SAS drives is great, they are small (we use 2.5-inch SAS on IBM blades), they don’t make as much noise or heat, they don’t use as much electricity, and they have reasonable capacity.
So what’s the problem? The problem is that I go home (well, sometimes, anyway) and I don’t have SAS at home, I have SATA.
Mind you, my SATA array sits behind an Areca 8-port RAID controller with 128 MB of cache on a PCI-Express based card, so it’s no slouch. But it’s not SAS, not by a long shot.
I now. . .must. . .have it. I neeeeeeeeeeed it. I don’t care what body part it’ll cost me! I want the speed and lightning response I get when I click the start menu or do some data migration chore on a SAS-based machine.
Vendors are now offering SAS cards with no RAID 5 or write cache available for about $150. The drives are about $250, which makes a small array at home not out of the question. (I just have to come up with a compelling argument to submit to the home finance committee. BTW, consider this an official cry for help to come up with an argument that will avoid the dreaded giant red “Denied–resubmit in 90 days” stamp the chair of said home finance committee has in her possession.)
But while trying to come up with this argument, it hit me. Traditional SCSI is dead as a doornail, and I missed the funeral.
In the meantime, if SATA ever slows down in its capacity growth, it had better look out too.
If I’m willing to sacrifice a bit of space for the speed, who else out there is willing to do the same? A decent capacity SATA disk will run you $200; a 150 GB Western Digital Raptor (10k rpm SATA) will run you $220. So why bother? Why not spend the extra $30 and get SAS? The controllers are about the same cost now for quality brands, the cabling and power envelope are roughly the same, acoustics on the 2.5-inch drives are not bad and the thermal footprint is not outrageous.
And there’s a downside to size. How long would it take to rebuild a RAID 5 or 6 array made up of 4 TB drives? How would I cope if I lost 4 TB of data?
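The rebuild question above can be roughed out with some quick math. The sustained rebuild rate here is an assumption on my part (a rebuild competing with live I/O often crawls well below a drive’s raw sequential speed); real numbers vary widely by controller and workload.

```python
# Rough rebuild-time estimate for one failed drive in a RAID 5/6 set.
def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to reconstruct one drive's worth of data at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / rate_mb_s / 3600

# Assumed 50 MB/s sustained rebuild rate under production load:
print(f"{rebuild_hours(4, 50):.1f} hours")  # roughly 22 hours exposed
```

Nearly a full day with the array degraded, and that is before any second-drive failure enters the picture.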
My future holds a 32 GB to 64 GB RAID 1 solid-state disk for my OS, with capacity SAS for the 3 TB that Office 2010 is going to take up. IBM has already released a 16 GB SSD for their blades with the 32 GB models soon to be widely available. Not only that, but you can set them up in RAID 1. (Every time I say “RAID 1 SSD” I have to giggle.)
Can someone give me an irrefrangible (Thanks for the SAT submission! More more!!!) argument why SAS will not someday soon be the SATA of today?
Notes from last week’s Fall Storage Networking World. . . .
LSI’s Engenio was the first systems vendor to make the jump from 2 Gbit to 4 Gbit Fibre Channel in late 2005. But LSI won’t have 8-gig systems when HBAs become available from the likes of Brocade, Emulex and QLogic around the middle of next year.
“We believe in 8-gig Fibre, but we’ll be there with the market, not ahead of it,” said Phil Bullinger, general manager of LSI’s Engenio storage group. “I think it will be a slower roll [than 4-gig] in the market.” . . . .
Encryption is usually the first thing that comes to mind when people think of storage security. But TD Ameritrade security architect Alan Lustiger warns that encryption doesn’t do much good to protect sensitive information in databases from hackers. Lustiger said hackers gain access to storage by getting in through security holes in the network.
“If the bad guys are going in the same way as the good guys, then encryption hasn’t bought you anything,” he says. Lustiger says that securing the operating system and Web servers is more important than encryption. “If you do nothing else, lock the front door,” he said, referring to Web servers. . . .
Two of the founders of Cisco-backed Nuova Systems made an appearance at SNW, but left without shedding any light on the company’s product line. Nuova’s marketing vice president Soni Jiandani and senior fellow Silvano Gai joined QLogic and Network Appliance at a Fibre Channel over Ethernet (FCoE) news conference but limited their discussion to the technology itself.
“We’re not launching products or the company,” Jiandani said when asked what Nuova’s role in FCoE would be. “We just wanted to speak about Fibre Channel over Ethernet and 10-gig.” . . . .
H3C, a subsidiary of 3Com based in China and the leading IP SAN vendor there, came to SNW to launch new products but does not intend to sell them in the U.S. “We are here to look for U.S. partners but not to sell systems here,” said H3C’s president Arthur Lee.
H3C’s upcoming products include a high-end IX3000 series that supports SAS and SATA, and will eventually include a Fibre Channel interface – although no Fibre Channel-only systems are on H3C’s roadmap. H3C’s U.S. partners include FalconStor and Intel. . . .
Venture capitalists need to find the next hot technology long before it becomes the next hot technology. So what are the VCs looking at now? Storage services, says Charles Curran, general partner at Valhalla Partners.
“We like storage services,” he said. “There’s a rapid growth of Internet storage, video, and those types of applications and people are looking for services to manage them.”
Services are hardly new, though. Curran said he’s still looking for the type of emerging technology Valhalla identified when it funded companies like LeftHand Networks to take advantage of Ethernet storage and Sepaton for its early position in disk backup.
“I’m trying to find the next tornado,” Curran said. “We don’t have one now.” . . . .
Comings and goings: Former BlueArc CEO Gianluca Rattazzi has a new software startup called MaxiScale. The company is in stealth mode, but raised $12 million in funding last March. Other MaxiScale execs include former Attune Networks CTO Francesco Lacapra and former Attune Systems VP of marketing Dan Liddle. . . . Storage veteran Larry Cormier, recently with defunct data classification startup Scentric, now heads marketing at iSCSI vendor LeftHand Networks.
The AP has reported that execs in Cisco’s Brazilian business unit have been arrested on suspicion of smuggling and tax fraud. Yikes.
Add this to the “storage / networking police blotter” over the last year, which has included such bizarre cases as the HP pretexting flap and a more recent instance of a NetApp manager accused of embezzling travel funds.
Anybody care to start a betting pool on which people from which big company will show up in the news next? It could be like a game of Clue…“EMC, middle managers, jaywalking, in New York City!” “IBM, Board of Directors, unpaid parking violations, in Research Triangle!”
Okay, maybe that’s just me.
Hitachi GST says it anticipates that by the year 2011, it will be able to pack 4 TB of data onto a SATA drive and 1 TB of data onto a 1.8″ notebook hard drive.
The drive maker is basing these predictions on a new drive-head design, which it is unveiling at the 8th annual Perpendicular Magnetic Recording Conference this week in Japan.
The new design incorporates perpendicular recording, which places bits on end rather than side-by-side on disk to increase density, as well as the principle of Giant Magnetoresistance (GMR), a breakthrough in magnetic materials science that earned its discoverers the 2007 Nobel Prize in physics. Simply put, GMR refers to the fact that very thin films of metal can be highly sensitive to magnetic changes if the films are in the presence of a magnetic field. GMR was discovered in 1988 near-simultaneously by Albert Fert of the Université Paris-Sud in Orsay, France, and Peter Grünberg of the Forschungszentrum in Jülich, Germany; the two men share the Nobel this year.
GMR has led to advances in drive density since 1997, according to John Best, chief technologist of Hitachi GST. But, what the company is announcing today is a new twist on the magnetic field part of the equation — a concept called Current Perpendicular to the Plane, or CPP-GMR. The new Hitachi drive head design runs the electric current vertically through the drive head, allowing the current to pinpoint ever smaller areas of the disk surface. This will allow the head to read drive tracks as close together as 30 nanometers. Today’s densest drive tracks are 70 nm or greater; 30 nm would yield the 4 TB size Hitachi is projecting for a 3.5″ drive.
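A back-of-envelope sketch of how the track-pitch numbers above map to capacity, under assumptions of my own: if only track pitch shrinks, density scales linearly with the pitch ratio; if bit length shrinks in step with it, density scales with the square. The real gain lands somewhere in between.

```python
# Rough scaling from the 70 nm and 30 nm track-pitch figures cited above.
pitch_today_nm = 70   # today's densest drive tracks
pitch_cpp_nm = 30     # what CPP-GMR heads are expected to read

linear_gain = pitch_today_nm / pitch_cpp_nm  # track density alone
areal_gain = linear_gain ** 2                # both dimensions shrinking

print(f"linear: {linear_gain:.1f}x, areal: {areal_gain:.1f}x")
# The ~2.3x to ~5.4x range brackets the 4x jump from today's
# 1 TB 3.5-inch drives to the 4 TB Hitachi is projecting.
```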
From here, however, the challenge will be consistently mass-producing both drive heads and drive platters at that density — and drive substrate materials will still need several more years to catch up. “It’s one thing to demonstrate a few heads and another to efficiently mass-produce them with reliable yields,” said John Rydning, research manager for hard disk drives at IDC. “What you have to remember is that Moore’s law refers to certain characteristics of the semiconductor manufacturing process, and hard drives are already at the very forefront of semiconductor manufacturing technology.” It will take several years for drives to catch up to the capabilities of these new heads, he said.
And maybe it’s time to ask, how big can drives get? 1 TB SATA drives are already causing systems makers to rethink RAID; what will 4 TB drives mean in terms of reliability and data protection? It’s an unknown right now, Rydning said, but he predicted the advance will more commonly be used to make big drives physically smaller, rather than denser. “Think about it,” he said. “We don’t have 5 and a quarter-inch drive sizes anymore.”
Xyratex might not be a company known to many end users — its major business is selling storage subsystem hardware to OEMs. But, if you’re concerned about the power draw on your storage system, you might want to start paying attention to who’s under the covers. Xyratex has announced a new version of its array called the OneStor Extensible Storage Platform (ESP) 4U24, the first in a new line of arrays with features that the company claims will make it easier for OEMs to integrate application-specific software onto its hardware platform.
More interesting to non-OEMs, though, are some of the things Xyratex is doing with this new box to decrease its power draw and make it more efficient. First, it’s now offering a low-power mode for inactive disks that can be controlled through software, but fail over to hardware controls on the device’s midplane in the event of a failure. Xyratex is also offering OEMs an API for shutting disks off and powering them back up again — setting the stage for MAID arrays.
But for this reporter, more notable is the fact that Xyratex is one of the first enterprise array vendors I’ve heard of to announce the elimination of power conversions within the silicon on the box itself. This is something Google and experts on the server side have highlighted as a major cause of energy inefficiency in computer systems. (In fact, we did a whole story on this issue back in June.)
In many data centers, converting between alternating current (AC, or wall power) and direct current (DC, or battery power) takes multiple steps. This results in some loss of power efficiency.
Within computers themselves, power is converted down to the low voltages individual components need; in today’s enterprise systems, the conversions typically involve +/- 5-volt and 12-volt rails. Different factions have different opinions on which of those voltages should be kept, but Xyratex has eliminated the +/- 5-volt conversion from its new box.
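The reason dropping a stage matters is that conversion losses multiply: the efficiency of a chain of conversions is the product of each stage’s efficiency. The stage percentages below are illustrative assumptions of mine, not Xyratex’s measured figures.

```python
# Overall efficiency of a series of power-conversion stages.
def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Assumed stage efficiencies: AC/DC supply, 12 V regulator, +/-5 V regulator.
with_5v = chain_efficiency([0.90, 0.85, 0.85])
without_5v = chain_efficiency([0.90, 0.85])  # +/-5 V stage eliminated
print(f"with: {with_5v:.3f}, without: {without_5v:.3f}")
```

Under those assumed numbers, removing one 85-percent-efficient stage lifts the end-to-end figure from about 65 percent to about 76 percent, which is why a seemingly small silicon change shows up in the power bill.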
Look for similar changes from other storage vendors to come soon; on the server side, they’re already well ahead of the storage market. There, the environmental energy technologies division at Lawrence Berkeley National Lab (LBNL) has been working with server vendors, the U.S. Department of Energy (DoE) and the Environmental Protection Agency (EPA) toward finding a standard single voltage for server hardware.