The NetApp/Sun patent infringement lawsuit that started roughly 10 years ago (or so it seems) continues on, this time with the latest dispatch from Sun’s general counsel, Mike Dillon, who was gleeful over the results of a Markman hearing in the case of Sun vs. NetApp.
A Markman hearing is designed to settle on agreed-upon definitions of technical terms in a patent-infringement lawsuit. Sun and NetApp each submitted their interpretation of the meaning of various terms under dispute, like “RAID” and “domain name.” These interpretations are called constructions.
According to Dillon,
In dispute were fourteen phrases in seven patents (four asserted by Sun and three by NetApp) that required the court to determine the meaning of terms like “Domain Name”, “Non-volatile Storage Means” and “Root Inode,” among others. Given the complexity, we were impressed when only two weeks later, the judge issued her order.
And, we were very pleased.
In summary, the court agreed with Sun’s interpretation on six of the disputed terms (two of which the court adopted with slight modification) and with NetApp on one.
While this is obviously more in Sun’s favor than NetApp’s, a Markman hearing is a pre-trial procedure. Agreeing with Sun on the terms doesn’t mean there was a decision in Sun’s favor. However, the court did make one ruling dismissing one of the outstanding patent claims from NetApp, US No. 7,200,715, or ‘715 for short, which referred to RAID.
According to Dillon:
…the Court found each of the asserted claims in NetApp’s 7,200,715 patent relating to RAID technology to be “indefinite” – meaning that someone with experience in this area of technology could not understand the limits of the claimed invention. With regard to NetApp’s ‘715 patent, the court agreed with Sun’s position that the claims of the patent are flatly inconsistent with and impossible under the teaching of the patent specification. In effect, unless NetApp appeals and this finding is reversed, the ‘715 patent is effectively invalidated in this case and against others in the future.
While it’s good for Sun, the original defendant, to have this claim dismissed, both companies are seeking injunctions against one another’s products as well as treble damages, and, I would imagine, a contrite apology. A dismissal is good for Sun, but not a finding that NetApp violated its patents. This dismissal amounts to a finding that nobody can really patent something as ubiquitous as RAID.
Dillon also published updated results from the Patent and Trade Office’s (PTO) reexamination of the patents under dispute in this case. The PTO already found in Sun’s favor on one patent, ‘001, back in June. Sun requested that this patent be taken off the table in the dispute, and the Markman court documents don’t show any reference to it.
Now, according to Dillon, the trial court has agreed to remove ‘001 from consideration. Meanwhile, the PTO has also rejected NetApp’s claims on two more patents, ‘211 and ‘292; ‘292 is the one that refers to WAFL. Uh oh.
…late last week, we were informed that the PTO has rejected all of the asserted claims of this patent relying on at least two separate prior art references out of the many provided by Sun. (The examiner felt that to consider the other references would be “redundant”.)
Some may recall that the ‘292 (“WAFL” technology) patent was what NetApp’s founder, David Hitz, originally highlighted on his blog as being innovative and infringed by ZFS.
While not a decisive victory for Sun (the claims still have to be addressed in court, and a dismissal of the patent again does not amount to “winning” the countersuit alleging that NetApp infringes Sun’s patents), it’s certainly nothing that could be called a victory for NetApp.
NetApp’s only comment to me about this so far is, “We’re very happy with the way these matters are progressing and we continue to read Mr. Dillon’s blog with great bemusement.”
EMC’s Mozy online backup has added a new Mac edition of MozyPro to its product line. This news follows the introduction of Mozy’s first Home edition Mac client in May.
The MozyPro for Mac product, which will be available immediately, adds centralized management features for Mac servers and workstations, including the creation of groups of clients and policies that control their backups. Management in MozyPro is “very fine-tuned,” according to Steve Fairbanks, director of product management for Mozy. “You can adjust backup sets and include or exclude file extension types according to policy.” Customers can also receive reports on backup job success rates, have alerts on failures sent to an administrator, assign backup quotas and administer roles with the new software. MozyPro for Mac will also be manageable through existing MozyPro for Windows management consoles for those who have a mix of Macs and PCs in their environment, Fairbanks said.
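Policy-based inclusion and exclusion by file extension, as Fairbanks describes it, boils down to a simple filter over candidate files. As a minimal sketch (this is my own illustration, not Mozy’s implementation; the function name and parameters are hypothetical):

```python
def select_for_backup(paths, include_exts=None, exclude_exts=None):
    """Apply a simple backup policy by file extension.

    include_exts: if given, only files with these extensions are kept.
    exclude_exts: extensions skipped even if otherwise included.
    """
    selected = []
    for path in paths:
        # Take the text after the last dot as the extension, if any.
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
        if exclude_exts and ext in exclude_exts:
            continue  # policy says never back this type up
        if include_exts and ext not in include_exts:
            continue  # policy restricts backups to a whitelist
        selected.append(path)
    return selected
```

An administrator-set policy would then be, say, `exclude_exts={"mp3", "avi"}` applied uniformly across a group of clients.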
Both the Home and Pro editions have specific features to support Mac, including:
- Support for resource forks, aliases and packages
- Spotlight Integration
- Native Cocoa Framework – Graphics are all native
- Mac Help System
- Menu Bar integration
- Native Apple Installer and installation process
- Mac-specific backup sets
One customer who’s been waiting for this rollout for quite some time is Walter Petruska, information security officer for the University of San Francisco. The University has MozyHome for Mac rolled out to some individual faculty and staff members, and the central IT department has been beta testing MozyPro for Mac for months. The plan is to roll MozyPro for Mac out to workstations used by the University’s distributed IT staff “so they can get a feel for Mozy from the client side,” Petruska said. However, the full rollout to all University servers and workstations will wait until there’s a MozyEnterprise edition for Mac.
EMC was coy when it came to whether or not there will be a MozyEnterprise for Mac, saying that MozyPro will meet most customers’ needs. But the MozyEnterprise edition that’s out now for PCs allows for more advanced management tasks like silent installs, deployment without software keys, and Active Directory and LDAP support for security. Otherwise, Fairbanks said, “there’s very little difference” between MozyPro and MozyEnterprise.
To Petruska, however, the differences are significant. “We’re waiting to make the leap to a new backup paradigm across the University until things align and we can manage all PCs and Macs as well as servers from one console with LDAP and Active Directory integration,” he said.
Right now, workgroups and departments at the university have separate backup plans, and most of those backups remain on-campus in San Francisco, which is prone to earthquakes. Petruska said he’s looking forward to “getting everyone on the same sheet of music” and sending all backups offsite to the cloud. Most of the Mac users on campus today use EMC’s Retrospect software for local backups, but Petruska said MozyEnterprise for Mac would replace it.
Meanwhile, EMC says no edition of Mozy will replace Retrospect in its product line. Rather, according to a Mozy spokesperson, Mozy and Retrospect will be integrated going forward in packages like the one announced with Iomega’s external hard drives in July.
Witness the carnage at VMworld of a booth giveaway gone bad…(VMblog)
And if I had to guess, I’d say it’s a new disk array. A self-healing, dynamically performance-optimized disk array.
For one thing, the latest fad is for new disk arrays to be promoted in what public relations pros call a “rolling thunder” fashion, where deliberately mysterious statements are made and glimpses are given of an upcoming product until the moment of its launch. See also: Xiotech’s ISE, Oracle’s Database Machine. HDS’s “to be named” is no exception.
More clues on the HDS preview website: “Hitachi + DLB = agile, no touch, no bottlenecks formula.” My guess is that DLB means dynamic load balancing, especially since, well, everything else on the site is about dynamic load balancing.
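If DLB does mean dynamic load balancing, the core idea is straightforward: rather than statically assigning workloads to controllers or disk groups, the array continuously routes I/O to whichever resource is least loaded. A toy sketch of the greedy version of that idea (my own illustration of the general technique, with hypothetical names; it implies nothing about how HDS actually implements it):

```python
import heapq

def balance_io(requests, n_controllers):
    """Greedy dynamic load balancing: send each I/O request
    (an (id, cost) pair, cost in arbitrary units) to whichever
    controller currently carries the least load."""
    # Min-heap of (current_load, controller_id).
    heap = [(0, c) for c in range(n_controllers)]
    heapq.heapify(heap)
    assignment = {}
    for req_id, cost in requests:
        load, ctrl = heapq.heappop(heap)   # least-loaded controller
        assignment[req_id] = ctrl
        heapq.heappush(heap, (load + cost, ctrl))
    return assignment
```

A real array would do this continuously and online, shifting work as hot spots emerge, rather than over a fixed batch of requests.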
For example, click on “View video” and some dude walks up to you, saying:
Get ready. It’s coming. What if you could improve your service level agreements for virtually any storage workload? Like you, I want the perfect formula, minimizing I/O disruption and bottlenecks. But what would that formula be? I believe it includes purchasing the minimum number of required disks to meet the performance criteria of all requests. Automatic workload management and exceptional bandwidth. Now I would like to ask, what if I give you the ability to dynamically shift I/O processing to keep workloads running smoothly? Then, what would your ideal storage environment look like?
At this point three choices appear inside the video screen:
- Minimal manual intervention required
- Minimize the risk of degradation when shifting I/O processing
- Self-healing system to overcome failure of key components
Meanwhile, a countdown clock on the site reads 9 days, 16 hours, 53 minutes, 52 seconds. In other words, Oct. 13 — the first day of Storage Networking World.
Around here, the scuttlebutt has been strong that HDS is prepping a new AMS (Adaptable Modular Storage) midrange array. The high-end USP has already gotten a couple of recent refreshes, including a mini-version, as well as a software update; it would make sense for HDS’s midrange arrays to be up for a revamp next.
All the talk about Fibre Channel over Ethernet (FCoE) over the last year has raised questions about the future of iSCSI storage once the convergence of FC and Ethernet takes place.
But Hewlett-Packard’s $360 million acquisition of LeftHand Networks proves that HP agrees with its rival Dell that iSCSI SANs are here to stay. Dell paid $1.4 billion for LeftHand’s iSCSI rival EqualLogic in January, and has ridden a mini-wave of iSCSI adoption: IDC said second-quarter iSCSI revenue grew 93.9 percent over last year.
While the acquisitions bring Dell and HP another storage platform and some product positioning issues, the vendors seem willing to let FC remain the dominant protocol at the high end while iSCSI adoption spikes among SMB and midrange shops due to growing interest in server virtualization and 10-Gig Ethernet.
Representatives of HP and Dell agree that history indicates FCoE adoption will be slow.
“The iSCSI standard was ratified in 2003, and here we are in 2008 just getting traction,” HP StorageWorks CTO Paul Perez says. “I think FCoE will follow a similar adoption curve and adoption will be slow. iSCSI will have a prominent place, especially with 10-Gig Ethernet. FCoE is a performance fabric, while iSCSI is a general purpose fabric.”
Dell vice president of marketing John Joseph, who was with EqualLogic before the acquisition, says iSCSI finally has momentum.
“Migration on and off technologies by storage customers is extremely slow,” he said. “It’s a helluva lot slower than watching paint dry. Typical adoption curves are measured in five-to-seven-year increments. We’re still in the early years of [iSCSI’s] adoption phase.”
Joseph says while he expects many FC SANs to migrate to FCoE, Data Center Ethernet and 10-gig Ethernet will erode the FC base and lead more storage shops to iSCSI.
“Ten-gigE makes a lot of objections [to iSCSI] go away, and Data Center Ethernet makes even more objections go away,” he said.
Last week I wrote about some confusion I had regarding IBM’s virtual storage optimizer (VSO) for VMware Virtual Desktop Infrastructure (VDI), especially after I was told by a VMware official that the IBM product, credited to an internally developed algorithm, was based on VMware’s Linked Clone API.
I wrote to one of the researchers involved and got a response through IBM’s PR spokesperson that:
- The IBM-developed algorithm is based on the VMware API available in Virtual Infrastructure version 3, not the VMware LinkedClone API. Specifically, the algorithm uses VMware Infrastructure SDK 2.5.0 as documented at http://www.vmware.com/support/developer/vc-sdk/ and file system level access on ESX servers.
- We developed the algorithm based on the API that was publicly available and supported at the time that we began development efforts
- VMware can provide detail on the differences between the APIs in Virtual Infrastructure version 3 and VMware LinkedClone API
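Whatever API sits underneath, both products aim at the same storage problem: hundreds of virtual desktops that are mostly identical. The linked-clone idea is copy-on-write — clones read shared blocks from one base image and keep only their own changes. A conceptual sketch (my own illustration of copy-on-write cloning, not VMware’s or IBM’s code; all class and method names are hypothetical):

```python
class BaseImage:
    """The shared 'gold' desktop image, stored once."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block_id -> data

class LinkedClone:
    """Copy-on-write clone: reads fall through to the shared base
    image; writes land in a private delta, so many desktops can
    share one base image's storage."""
    def __init__(self, base):
        self.base = base
        self.delta = {}  # only this clone's changed blocks

    def read(self, block_id):
        # Prefer the clone's own copy; otherwise use the base.
        return self.delta.get(block_id, self.base.blocks.get(block_id))

    def write(self, block_id, data):
        self.delta[block_id] = data  # never touches the base image
```

The storage savings come from the fact that each clone’s `delta` is typically a small fraction of the base image’s size.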
So far no response from VMware.
Regardless of what API was or was not used, what I am trying to get at is the functional difference between these two products, if any. If there is one, it’s important for users to know about. If there isn’t one, it speaks to the growing convergence between VMware’s virtual infrastructure and storage vendors’ value-add software.
The bottom line right now seems to be that IBM’s product is for existing IBM customers, since it requires professional services through IGS. There are some shops that need the IBM label before they buy, and so VSO could at least be a fit for them.
I’d appreciate weigh-ins from IBM, VDI and/or VMware experts.
Backup vendor Quantum and WAN optimization specialist Riverbed Technology dropped their respective data deduplication patent lawsuits against each other this week, with Riverbed agreeing to pay Quantum $11 million. Both sides dropped all claims and agreed not to file more data deduplication patent suits against the other.
The legal scuffle began in October when Quantum charged Riverbed with infringing on a dedupe patent granted to Rocksoft in 1999 and later acquired by Quantum through an acquisition. Riverbed countered in November with its own suit, charging that Quantum’s dedupe products infringe on a Riverbed patent.
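The technology both suits revolved around — hash-based data deduplication — works by splitting data into chunks, fingerprinting each chunk with a cryptographic hash, and storing any given chunk only once. A minimal fixed-size-chunk sketch (my own illustration; real products, including the Rocksoft-style approach, typically use variable-length, content-defined chunks):

```python
import hashlib

def dedupe_store(data, chunk_size=4096, store=None):
    """Split data into fixed-size chunks, index each chunk by its
    SHA-256 digest, and keep every unique chunk only once. Returns
    a 'recipe' (ordered digests) that can rebuild the original."""
    if store is None:
        store = {}  # digest -> chunk bytes
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # skip if already stored
        recipe.append(digest)
    return recipe, store

def reassemble(recipe, store):
    """Rebuild the original data from its recipe of digests."""
    return b"".join(store[d] for d in recipe)
```

Riverbed applies the same fingerprint-and-reference idea to avoid resending duplicate data over the WAN, while Quantum applies it to shrink backup data on disk — which is how two such different products ended up fighting over similar patents.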
From here, the settlement looks like a draw. Quantum got paid, but didn’t get Riverbed to stop using the technology in its WAN appliances as it tried to do in its suit. Such a judgment would have cost Riverbed a lot more than $11 million.
SMB backup and DR vendor Unitrends has released version 4.0 of its RapidRecovery management software for its Data Protection Unit disk-to-disk backup hardware. The new version completes a yearlong effort from Unitrends to bring together what were once separate GUIs for managing backup and offsite vaulting using the DPU devices.
A year ago, the company removed the command-line interface, which CEO Duncan MacPherson described as “a late ’90s level GUI that looked old and slow.” At that time, Unitrends gave the backup and configuration management interfaces a facelift. The current release pulls in offsite vaulting and data recovery. Other new features include the ability to create customized reports from the GUI, test DR plans and recover single files from a secondary site, as well as support for new operating systems including Novell NetWare. MacPherson said Windows 2008 will be supported by the end of the year.
Unitrends’ goal is to package all data protection processes and hardware into one product. Combining operational backup and disaster recovery practices also seems to be an emerging trend. This is also being done through backup service providers whose backups by definition are offsite, and who are beginning to offer more affordable system state recovery of hosts using virtual servers. Stay tuned to the SearchDataBackup.com and SearchDisasterRecovery.com sites for more on this.
(Ed. Note: this guest blog comes from Siemens Medical Solutions storage administrator Jim Hood in response to the editorial in the July Storage magazine, Dedupe and virtualization don’t solve the real problem).
I was happy to see that someone finally acknowledged the root of some of the evils in the storage business. Your editorial, “Dedupe and virtualization don’t solve the real problem,” spoke to the heart of the matter: “The math is easy: More servers mean more apps, and more apps mean more data.” It cannot be spoken any clearer than that. I have been involved with storage all of my 27 years in IT, from the early ’80s until now, spanning mainframe and open systems, and have seen the amount of data expand exponentially. I wish my retirement fund had the same growth curve.
In our business, we continue to satisfy our hosted mainframe customers’ needs with relatively small amounts of data (our bread-and-butter apps in z/OS use customized VSAM [Virtual Storage Access Method] files hardly over the “4-gig limit” to provide databases for hospital clinical applications) while similar applications on Windows stretch the imagination – mine at least. As someone who has lived through this transformation and now has to support the backup processes for our open system business, the amount of data we handle makes my head spin.
It isn’t unusual for us to process 25 TB of backup data every day (because we use Tivoli Storage Manager, this consists of only new or changed files). We have accumulated over 2 PB of capacity in our backup inventory. I don’t see it getting any less even though we have an active relationship with users, and encourage them to look at what they back up and how long they retain the backup data. The volume just keeps growing.
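The “only new or changed files” approach the writer mentions is TSM’s incremental-forever model: a full copy moves once, and every subsequent run backs up only what differs from the last catalog. The selection logic can be sketched like this (a hypothetical illustration of the general idea, not TSM’s actual code; the signature tuples here are just file size and modification time):

```python
def incremental_backup(current, previous):
    """Return only the files that are new or changed since the last
    backup, comparing per-file signatures such as (size, mtime).
    This is the 'incremental forever' idea: the full data set moves
    once, then only deltas travel on each run."""
    changed = {}
    for path, sig in current.items():
        if previous.get(path) != sig:  # new file, or signature differs
            changed[path] = sig
    return changed
```

Even with such incremental selection, the writer’s 25 TB/day figure shows how fast the raw change rate alone can grow.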
With all the technology at our disposal, the industry does not seem to want to address your basic math problem. I believe we live in an age where both technology and its pricing have brought us to a point where “creating data is cheap” — so cheap that there is no turning back. We seem to have lost the thought processes associated with data management: how many files, file size, other data spawned from these files, where does the data reside, what data should be backed up, etc.
I’m not sure, going forward, how to make it appear as though storage costs are kept relatively level while at the same time incurring new costs for hardware, software and people to manage this growth. In our environment we pass on expenses by using a chargeback system, but pressure from the user base (application development) to reduce their costs from one fiscal year to the next usually translates to lower chargeback pricing while the real problem — too much data — persists. We can try to dedupe and virtualize our way out of this, but somebody will have to pay for it.
To really address this problem will require, as you stated, “an awful lot of manual work,” but it will be difficult for many organizations to cough up the resource costs to do so. Let’s face it, that grunt work doesn’t generate any new revenue through new products. So again, it becomes a storage management issue rather than a data management solution.
My view is this: Twenty years ago we had a modest home with a one-car garage (mainframe) to keep all our stuff in. In the last decade we decided we needed more stuff — newer stuff — and moved to a larger house with a two-, heck, three-car garage (Windows). The reality of the economy and housing market is reshaping the world of real estate. I’m not sure what kind of “housing crunch” will be necessary to have us take a different look at how we create data. Getting people to do that would be a good first step in the right direction.
Finally, on a more humorous note, I think one of the problems is in how we refer to amounts of data. One TB is no big deal, right? How do I sell my problem to those who write the checks when I speak in terms of one or two of something? “So, Jim, you say you can’t manage your 2 PB easily!” or “What is so hard about managing your growth from 1 PB to 2 PB, come on, you only grew by one!” It is all about perception these days, and by truncating real capacities, we diminish the true state of affairs. Sometimes I try to communicate the reality by simply changing the language: 2,000 TB makes a larger impact than 2 PB. Maybe we all need to begin speaking in larger quantities than single digits.
EHS Storage Management
Siemens Medical Solutions
Until now, the storage industry has held up well this year in the face of any economic slowdowns – even those affecting the financial services sector.
But with the economy’s problems taking center stage in the U.S. this week, financial analysts dusted off their crystal balls and saw a gloomy future for storage vendors. There were a slew of stock downgrades and even more earnings reduction forecasts for storage and the IT industry in general this week. And almost every one was attributed to the general economy rather than specific company problems. Whether Wall Street or Main Street gets the worst of the fallout, the consensus is less money will be left to spend on technology.
As RBC Capital Markets analyst Tom Curlin put it in a research note this week:
” … our deceleration stance with respect to U.S. IT spending is evolving to a contraction stance. The credit markets continue to tighten and the flow of credit to consumers and corporations is contracting. The metrics we track to ascertain consumer and corporate buying power are also contracting. In concert, degrading employment and capital spending metrics do not bode well for IT spending over the next 12 months.”
Curlin added his research shows “a neutral tone” regarding business this quarter, “but greater concern about the forward outlook. Naturally, this concern has risen after the collapse of Lehman and the various aftershocks in the financial system. Thus far, enterprise storage demand is steady, whereas we sense server demand has waned in recent weeks.”
Curlin downgraded the stock ratings of QLogic, Xyratex, Voltaire, and VMware, and slashed 2009 earnings estimates and price targets of EMC, Brocade, NetApp, Seagate, CommVault, 3PAR, and Compellent. In each case, he cited global IT spending slowdowns.
In a note to his clients this week, Aaron Rakers of Wachovia reported that around 18 percent to 20 percent of enterprise storage revenue comes from the financial services industry. He estimates systems vendors EMC and Hitachi Data Systems and switch vendor Brocade generate around 20 percent of their revenue from financial services, with HBA vendor Emulex likely higher.
So it follows that serious trouble in the financial services industry threatens a good chunk of storage sales.
Also this week:
- Pacific Crest’s Brent Bracelin cited a forecasted slowdown in data center-related spending while cutting stock ratings for Brocade, Double-Take, CommVault, and QLogic and cut price targets or estimates on EMC, NetApp, 3PAR, HP, Data Domain, Emulex, and Mellanox.
- Morgan Stanley lowered estimates on PC hardware stocks because of decreased global demand, and dropped stock price targets for EMC, IBM, Dell, Hewlett-Packard and Cisco among others.
OK, it wasn’t all bad news. Calyon Securities upgraded data deduplication specialist Data Domain’s stock based on solid results this quarter. It figures the one storage technology that’s growing these days is one that’s responsible for bringing about reduction.