Storage Soup


October 10, 2007  2:04 PM

CDP’s evolution takes shape

Maggie Wright

The evolution of continuous data protection (CDP) in companies is taking shape. BakBone Software’s inclusion of CDP as a new feature in its NetVault:Backup 8.0 release puts it among a growing number of products, such as Asigra’s TeleVaulting and InMage Systems’ DR-Scout, that use CDP to protect Windows and Linux servers.

The rationale for including CDP in backup software is simple: easy backup and recovery of standalone Linux and Windows servers remains a significant challenge. Companies still have too many of these servers and too few administrators, who struggle to find a cost-effective means to back them up and recover them.

Using CDP as part of the backup client addresses this issue on several fronts. It replicates data to disk locally and remotely; it provides fast recoveries to any past point in time (typically three to 30 days); and, because it creates and keeps a complete copy of the data on disk on another host, it gives administrators a copy of the data they can manipulate in multiple ways.
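To make the journaling concept concrete, here is a minimal sketch in PowerShell (purely illustrative; no CDP product exposes an interface this simple) of how journaling every write is what makes recovery to an arbitrary point in time possible:

# Illustrative only: a CDP engine journals every write with a timestamp,
# so "recovery" is just a replay of the journal up to the requested moment.
$journal = New-Object System.Collections.ArrayList

function Record-Write($block, $data) {
    # Each entry captures when the write happened, where, and what was written.
    [void]$journal.Add(@{ Time = Get-Date; Block = $block; Data = $data })
}

function Restore-ToPointInTime($asOf) {
    # Replay every journaled write made at or before $asOf; the last write
    # to each block wins, reconstructing the volume as of that moment.
    $volume = @{}
    foreach ($entry in $journal | Where-Object { $_.Time -le $asOf }) {
        $volume[$entry.Block] = $entry.Data
    }
    return $volume
}

Seen this way, the recovery window (the typical three to 30 days) is simply a measure of how much journal you are willing to keep on disk.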


October 10, 2007  1:16 PM

How to archive home directories that don’t have AD accounts

Tory Skyers

My name is Tory Skyers. Through circumstances not entirely beyond my control (!!) I have been deeply involved throughout my career in various types of centralized and distributed storage. Now, at the end of a long chain of events beginning in Long Beach, CA with Curtis Preston and some blinky magnets (I’ll let you use your imagination), I’ve been offered an opportunity to share some of my experiences and insight with you. 

I’ve been agonizing for a week over what to say in my first blog post. I thought it had to be earth-shatteringly profound, but of all the catchphrases and tag lines I came up with, this seemed to sum it all up best:

Hello and thanks for reading my Blog.

What do you think? Just imagine a guy smiling ear to ear and waving at you from behind his keyboard :).

An admission: I’m absolutely fascinated by storage. The technology that goes into connecting computers and people to storage today was the stuff of science fiction 20 years ago. Pause for a second and take a look at where we are in storage: 1TB hard drives, 55GB optical discs, 10Gb Ethernet, 4Gbps Fibre Channel, 3-millisecond seek times, 300MBps throughput… all these numbers add up to wow, at least to me. When I think about all the technology out there, I feel like that kid at the toy store window with my eyes the size of saucers, staring at the G.I. Joe with the kung-fu grip and the Spider-Man Hot Wheels set.

Here in my little corner of cyberspace I’ll be blogging about some of those stare-inducing storage technologies from my perspective, which is that of a network administrator (and which, according to friends, is sometimes “warped and twisted” by my own particular brand of logic). I’ll also be touching on the ennui (an SAT word I’ve been dying to use in a sentence) that I see creeping into the market. Check back from time to time and let me know if you agree with me. (And dig out that old SAT prep book while you’re at it–send me a word or two and I’ll see if I can roll it in.)

One last thing: I gave a presentation on mobile storage at the recent Storage Decisions show in New York, and at the end of the presentation I mentioned a few scripts I wanted to share with the attendees. Below is a copy-and-paste of a simple script using ADfind from Joeware.net to archive users’ home directories that no longer have Active Directory accounts. This script can certainly be more elegant, so feel free to expand, expound and extend. There are a few things on the “to-do” list for it: first, make it self-contained so it doesn’t need an input file (i.e., do the AD query using ADODB or something similar); second, add logic to validate permutations of a username or directory; third, turn it into a pretty HTA (HTML Application). I’m working on migrating this script to PowerShell.

The code is below the jump. Copy it out using Notepad (not WordPad) or a script editor and save it as a .vbs file. Run it from the command line with an input text file containing one username per line. You’ll need to insert specifics for your environment, like domain names.
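And for those of you already playing with PowerShell, here is a rough sketch of where the migration is headed. To be clear, this is not the script from the presentation, just an illustration, and it takes a stab at the first to-do item by querying AD directly instead of reading an input file. The file server and archive paths are placeholders, so substitute your own:

# Rough PowerShell sketch (not the .vbs from the presentation): archive home
# directories whose owners no longer have Active Directory accounts.
# '\\fileserver\home$' and 'D:\Archive' are placeholders for your environment.
$homeRoot = '\\fileserver\home$'
$archiveRoot = 'D:\Archive'

foreach ($dir in Get-ChildItem $homeRoot | Where-Object { $_.PSIsContainer }) {
    # Treat the directory name as a sAMAccountName and look it up in AD.
    $searcher = New-Object System.DirectoryServices.DirectorySearcher
    $searcher.Filter = "(&(objectCategory=user)(sAMAccountName=$($dir.Name)))"
    if ($searcher.FindOne() -eq $null) {
        # No matching account: sweep the home directory into the archive area.
        Write-Host "Archiving $($dir.Name)"
        Move-Item $dir.FullName (Join-Path $archiveRoot $dir.Name)
    }
}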

Again, thanks for the read!


October 4, 2007  3:33 PM

Sun has more ZFS visions

Beth Pariseau


Sun’s campus in Burlington, Mass. Photo by Beth Pariseau.

Today found me at Sun’s campus in Burlington, Mass., one of three cities across the world where Sun was holding “virtualization chalk talks” with press–the others were San Francisco and London.

Why this big fanfare? Sun’s coming out with its own server virtualization hypervisor, based on open-source code from Xen, alongside its LDOM “container” offering for UltraSPARC servers. Rumors have been circulating about it recently and Sun wanted to clarify its vision.

So what does this have to do with storage? My question exactly. The answer: at this point, probably something, but Sun’s a little light on the details. ZFS was mentioned, of course, as the underlying filesystem for virtual machines running on its xVM Server.

One of the VPs presenting today, Sun’s Connected Systems group leader Steve Wilson, also pointed to the example of the Texas Advanced Computing Center (TACC), which is running a 4000-node IBM blade server farm attached to Sun’s big honkin’ Magnum InfiniBand switch (3456 ports) and a grid of (guess what) “Thumper” SunFire X4500 servers for storage.

This all ties in–vaguely, at this point–with the announcement from Sun on Monday that it’s melding its server and storage groups. Sun’s futuristic vision is that server processors and Ethernet pipes are catching up to traditional storage subsystems and Fibre Channel fabrics in terms of performance and scalability. It’s essentially a different twist on Sun’s “the network is the computer” mantra that suggests “the server is the SAN.”

Meanwhile, Sun is announcing a software product to go along with xVM called Operations Center, which will manage both physical and virtual server systems in one console. Eventually, when the lion lies down with the lamb and grid data centers run in server/storage peace and harmony, said software (theoretically) will also manage the Thumper grids everybody’s going to be using for storage.

That’s the idea, anyway. The roadmap is, to put it mildly, unclear. Sun spent most of its time today talking about its new server virtualization technology, and aside from ZFS and Thumper, no specific storage products or services were mentioned, nor was any time frame given for the extension of Operations Center to storage systems, or when or if Sun’s virtualization vision will extend to incorporate traditional storage subsystems.

As always, when it comes to Sun and storage, ZFS is the glue, the linchpin, the centerpiece and the mantra. ZFS is what makes Thumper tick, it’ll be the file system underpinnings for Sun’s virtual servers, and it apparently will also be key to this “data center of the future” Sun is planning out.

But there’s an elephant in the room. “What happens,” asked this reporter of Sun’s officials in Burlington, “if NetApp wins its lawsuit?” (For those of you unaware, NetApp has filed a patent-infringement lawsuit against Sun claiming ZFS violates NetApp’s patents, and it is seeking an injunction. The two may eventually drop their posturing and come to some sort of cross-licensing agreement, but as things stand right now, a NetApp win means not a licensing deal but a mandate to stop distributing ZFS altogether, period.)

You could’ve heard a pin drop when the question was asked today. Finally, the response was, “We are not going to talk about ongoing litigation.”

Reading between the lines, though, it appears that the rest of Sun is following Jonathan Schwartz’s somewhat nonchalant attitude toward the lawsuit–the message that seems to be coming from all this is that Sun’s confidence in its legal position is such that it doesn’t feel compelled to hold off one bit with ZFS. But at the same time, it raises the question of what could happen as Sun moves forward with a storage product strategy that is so heavily dependent on a disputed piece of IP.


September 21, 2007  8:40 AM

Storage newcomers deliver on predecessors’ promises

Maggie Wright

CDP and DPRM software are relative storage newcomers, but they may be the software that finally delivers on the promises of their SRM and storage virtualization software predecessors.

Storage resource management (SRM) and storage virtualization software have taken their turns sharing the storage spotlight over the past few years but have largely failed to deliver on their promise. Though companies may use them in some tactical way, such as doing LUN masking, fabric zoning or data migrations, neither has really delivered the simplified, automated storage management environments that vendors promised and customers hoped for.

My company tried both, and we saw the strategic value that SRM and storage virtualization software could deliver, but we never could figure out a way to turn that promise into reality. When push came to shove, it was almost impossible to find a low-risk, profitable way to transition from Excel-based FC SAN management to SAN management built on these two software tools.

What my company needed, and what is still needed, is a way to segue from FC SANs managed by Excel spreadsheets to SRM and storage virtualization software without a rip-and-replace strategy. It was while evaluating the latest generations of data protection and recovery management (DPRM) software and continuous data protection (CDP) software that I may have stumbled across a way for companies to make this transition.

Companies usually bring DPRM software in-house to report on the successes and failures of backup jobs. Though it still does that, DPRM software is quickly expanding to monitor and report on other components of the backup infrastructure, including server performance, fabric switches, and virtual and physical tape libraries. Though the impetus for offering these features is to better troubleshoot systemic problems in the backup infrastructure and to do capacity planning, companies are inadvertently using DPRM software in much the same way SRM software was intended to be used.

A similar pattern is emerging with CDP software at the high end, with products such as EMC’s RecoverPoint, HP’s CIC and Symantec’s CDP/R. These CDP appliances install into FC SAN fabrics and operate just like the original FC SAN-based storage virtualization appliances, except that they journal all writes and are only used when production storage fails.

The reason users are now willing to introduce either CDP or DPRM software into their production environments is that they no longer feel they are risking their production applications or stretching their budgets for products whose value proposition is dubious. CDP and DPRM products solve immediate corporate pain points, are justified with existing dollars and carry less risk – a win for both the vendors and the users.

Now the question is, will CDP and DPRM software eventually evolve to assume responsibilities that their SRM and storage virtualization software predecessors never really delivered on in the minds of customers? My guess is yes.


September 20, 2007  12:15 PM

At this rate the world will be green in a decade

Nicole D'Amour

Not a day goes by that I do not hear from yet another storage or server vendor that their offering, whatever it is, is green. This mania started in earnest about a year ago. Prior to that, the green movement was pretty much restricted to organizations outside the computer industry. So, what is really going on? What has caused every software and hardware company to suddenly formulate a green message?

I have a theory, and you heard it here first. I think there is a fundamental grassroots movement towards green that has started in the U.S., and it is picking up momentum like no other I have seen in thirty years. This movement is more powerful than the Presidential elections and other important matters facing the country. It is bigger than Exxon and Mobil. It is bigger than GM and Ford. For years, the debate has been raging about global warming. No matter which side of the debate you place yourself on, the green movement has begun. And because it is now becoming fashionable, every company in every industry will feel the need to do something “green.” I believe we are now in phase 1 of this movement. In phase 1, each company takes stock of what it has in its product line and extracts what it can of a green message. Granted, most, if not all, of these companies had never thought of any of their products in terms of green before. Not in product development and not in marketing. Of course, good design practices prevailed, and many resulted in lower power usage or smaller packaging, but they were hardly ever viewed within the context of green. So, in phase 1, what we are seeing is a recasting of the company message incorporating green.

I see it every day. Sometimes I laugh when I see a storage company twisting and turning its message to incorporate green. More than one company has even stated to me that it is so green that its logo has green in it. Give me a break. The logo was done years ago, when green was equated with the color of a person’s face when they saw a ghost.

But I frankly don’t care.

I am thrilled just to see the storage companies participate in the green movement. So what if 75% of what I see today is a recasting of a message around an older product? So what if Manhattan’s energy and space crunch started the ball rolling? I think once a company is committed to the green message, it will design its next product accordingly. It can’t escape it. That is why I believe the green movement will have a genuine impact in the next five years. Let the companies play the game. Play along with them. Give them slack for now. Because once they are in, they are in. I love it.

Before you think that there is nothing real in the products today, let me restate something. There are technologies that have hit the market in the past three years that are making a serious green impact. Data deduplication is one such technology. It hit the market on the secondary storage side first, that is, applied to the backup/restore and archiving markets. Used in appropriate ways, it can reduce the amount of disk required by a factor of 20. No matter which way you look at it, 1 TB of storage uses a lot less power and requires a lot less cooling than 20 TB. Thin provisioning is another good example. I chose these examples to illustrate a point: it is not simply hardware technologies that deliver green. In fact, at Taneja Group we believe software will play a huge part in the greening of storage and servers. That is not to say that hardware won’t play a role. Look at Copan’s MAID technology, for instance. Or IBM’s and HP’s blade server technology. New techniques for airflow through racks, nanotechnology and new data center designs will all contribute. But we believe the impact of software technologies will dominate, especially with installed hardware.
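To see why a factor of 20 is plausible, consider deduplication at its simplest: fingerprint each piece of data with a hash and store only the pieces you have never seen before. Here is a toy sketch in PowerShell (illustrative only; real products deduplicate at the block or segment level, not per file, and the path is a placeholder):

# Toy illustration: hash every file under a backup area and count how much
# capacity unique content would actually need.
$sha = [System.Security.Cryptography.SHA256]::Create()
$seen = @{}
$totalBytes = 0
$storedBytes = 0

foreach ($file in Get-ChildItem 'D:\Backups' -Recurse | Where-Object { -not $_.PSIsContainer }) {
    $hash = [BitConverter]::ToString($sha.ComputeHash([IO.File]::ReadAllBytes($file.FullName)))
    $totalBytes += $file.Length
    if (-not $seen.ContainsKey($hash)) {
        # First time we have seen this content: it must actually be stored.
        $seen[$hash] = $true
        $storedBytes += $file.Length
    }
}
"Dedup ratio: {0:N1}:1" -f ($totalBytes / $storedBytes)

Against weeks of mostly identical full backups, the stored total barely grows while the logical total balloons, which is exactly where 20:1 ratios come from.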

Green will soon become a competitive advantage. Because of the financial implications, real change will occur. Soon even the Mobils and the Exxons will have to yield to the pressure. That is how strong grassroots movements are. I believe the time is here. And I couldn’t be happier.

Note: Recently Taneja Group wrote a Technology In Depth paper on this topic. If you would like a copy please send a request through www.tanejagroup.com.


September 18, 2007  10:51 AM

Web 2.0 companies get deeper into data storage, email SaaS

Beth Pariseau

Yahoo bought Zimbra Inc. today for $350 million. The New York Times reports that the acquisition is meant to help Yahoo better compete with Google and its Gmail service, which Google began offering to businesses this year.

We’ve covered Gmail quite a bit, both on the main news site and this blog. We covered the launch of Google Apps for Enterprise and its 10 GB inbox, and then spoke with one early adopter of the Apps on how he’d used it to save time and money on email storage. We’ve also discussed some of the “gotchas” with software as a service for enterprises, and fielded Google’s response to those points. Finally, we’ve seen Yahoo peeking over Google’s shoulder a bit, with its announcement of unlimited inbox capacity for its webmail.

Shortly after enterprise storage experts started questioning the security and compliance of Google’s offering, the company went out and bought an enterprise archiving player, Postini. By that time, analysts were remarking that software as a service, particularly for email archiving and backup, is officially back. “People now will say, ‘oh, no one’s going to get rid of Exchange’,” ESG analyst Brian Babineau said at the time. “But it’s a generational thing — newly graduated employees are joining businesses from college already standardized on Gmail, and many corporations are saying, why manage Exchange when employees are already used to this Web-based, outsourced interface?”

It appears Yahoo sees the same writing on the wall. Today, like Google, it purchased a company to bring in some enterprise-level expertise for its email SaaS.

The storage market is already a little bit familiar with Zimbra. It’s an open-source messaging system, so most of the product’s features have little to do with storage, per se. But Zimbra also began announcing archiving customers recently, including an ISP in Dallas, Texas, that plans to offer Zimbra archiving to its customers in local K-12 school districts.

As with Postini, the company that Google acquired, Zimbra’s software has most of the features on the enterprise email archiving checklist, including automatic .pst file discovery and migration, and the ability to index, search and export messages or mailboxes for e-discovery and compliance purposes.

Where it differs from other products, the company claims, is that it can support multiple email systems, including Exchange, Lotus Domino and GroupWise, as well as its own email application, and can hold messages from any combination of those applications in the same repository. A disadvantage for Yahoo, meanwhile, is that Zimbra’s archiving product is relatively unproven in the market, having just become generally available July 23.

Interestingly enough, Google had an announcement of its own today–the release of Google Presentations, a Web-based competitor to PowerPoint. Clearly, Google is going after Microsoft hard, but over in this little corner of the IT market, I’m having a bit of a chuckle today–in responding to the “gotchas” in a Q&A with Storage Soup back in April, Google Enterprise product manager Rajen Sheth told me the following:

We’re definitely not trying to duplicate Microsoft Office. The way I would think of it is that Office is very well designed for individual productivity–an individual preparing something to present to a group of people. We’re focusing Google Docs and Spreadsheets on collaborative use case scenarios.

We’ll never know if Google saw an opportunity and changed its mind or if the market taking off influenced its decision to release Presentations. But one thing is clear as this trend continues, with reports also out this week that Facebook is contemplating throwing its hat into the application-storage ring as well: Web 2.0 giants are fast becoming the successors to Microsoft and IBM as the dominant force in computing of the 21st century. The more news I see like this, the more I’m inclined to side with Babineau–the times, they are a-changin’. 


September 14, 2007  7:49 AM

New data backup SaaS players emerge

Beth Pariseau

Once upon a time in the storage market, storage service providers were all the rage. Then, the tech bubble burst and most of them went the way of the dodo bird.

But with storage growth in recent years forcing companies to consider new strategies for managing data, storage service providers are making a comeback. Within that market space, meanwhile, backup and recovery is the most popular area, as users struggle with the cost of protecting more and more data, the distraction of backup and recovery management from core business and IT operations, and ever-increasing regulation.

Naturally, this is the market where the lion’s share of new players are springing up. EMC Corp. and Symantec Corp. are among the heavy hitters that say they’re planning backup SaaS. But there are also some new and emerging vendors that are gaining attention in the market with the return of interest in outsourcing.

One of the companies that’s made its presence known in recent weeks is Nirvanix Inc., which is aiming to be a business-to-business outsourcer for large companies. It’s come out of the gate overtly challenging Amazon’s S3 service, saying it can overcome the performance issues that have been reported by some large S3 users. The service is also offering a 99.9% uptime SLA to customers.

Nirvanix claims it can offer better performance because it is constructing a global network of storage “nodes” based on the way Web content delivery systems work — by moving the storage and processing power closer to the user, cutting down on network latency.

Within each of Nirvanix’s storage nodes is a global namespace layered over a clustered file system, running on Dell servers residing in colocation facilities. These nodes also perform automatic load balancing by making multiple copies of “popular” files and spreading them over different servers within the cluster. With this storage infrastructure, the company is claiming that it can offer users a wider pipe as well as a faster one, allowing file transfers of up to 256 GB. Moving forward, according to CEO Patrick Harr, the company plans to offer an archival node for “cold storage” within 6 months.

One potential issue for the company in comparison to S3 is a lack of financial clout to match Amazon. Building out the storage node infrastructure will be an expensive proposition in comparison to creating software and running a typical data center, and so far, the company says it has received just $12 million in funding, some from venture capital firms and some from research grants. However, it also says 25 customers have already signed up for beta testing, and says one of those customers is supporting 50 million end users.

Base pricing for the service is 18 cents per stored gigabyte per month, a “slight premium” over Amazon’s price according to Harr. The company is hoping that it can increase sales volume and drive down the price.

Meanwhile, on the consumer/SMB side, a company called Intronis LLC is souping up its features in the hopes of gaining traction in the low end of the storage market. Version 3.0 of its eSureIT backup service will allow users to create a tapelike rotation scheme for files, creating backup sets and setting policies for data retention on a weekly, monthly and yearly basis. The company has added a plugin it calls Before and After, which will allow users to create scripts dictating what their computer systems should do before, during or after engaging with the Intronis service — for example, the script can have the user’s machine shut down applications prior to backup and restart them after backup has finished. Another new plugin will allow mailbox and message-level backups and restores of Exchange databases, and adds a text search for email repositories.
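To give a flavor of what a Before and After script might look like (hypothetical; the plugin’s actual conventions are Intronis’s own), a “before” script could be as small as stopping a service so its files are closed and consistent while the backup runs:

# Hypothetical 'before' script: stop an application service so its files
# are closed and consistent during the backup window.
Stop-Service -Name 'MSSQLSERVER' -Force

# The matching 'after' script would simply restart the service:
# Start-Service -Name 'MSSQLSERVER'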

But the biggest new development, and the one that’s taken the better part of two years to develop, according to Sam Gutmann, co-founder and CEO, is a feature the company is calling Intelliblox, which, like other enterprise-level backup services such as Asigra’s, backs up only changed blocks over the wire. The feature uses a set of checksum and hashing algorithms to identify blocks and keep them together with their corresponding files. (An existing feature of Intronis’s service is total separation between the company’s admins and users’ data — each user is given an encryption key to access its storage at Intronis’s data center, and Intronis says it has no way of reading any of its customer data.)
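The general technique is straightforward to sketch, though Intronis hasn’t published Intelliblox’s internals, so treat this as an illustration of changed-block detection in general, not of their implementation (the file path is a placeholder): hash each fixed-size block of a file and compare against the hashes recorded on the previous run, and only blocks whose hashes differ go over the wire.

# Illustrative changed-block detection: split a file into fixed-size blocks,
# hash each one, and compare against the hashes from the previous backup run.
$blockSize = 64KB
$md5 = [System.Security.Cryptography.MD5]::Create()

function Get-BlockHashes($path) {
    $hashes = @()
    $stream = [IO.File]::OpenRead($path)
    $buffer = New-Object byte[] ($blockSize)
    while (($read = $stream.Read($buffer, 0, $buffer.Length)) -gt 0) {
        # Hash only the bytes actually read, so a short final block works too.
        $hashes += [BitConverter]::ToString($md5.ComputeHash($buffer, 0, $read))
    }
    $stream.Close()
    return $hashes
}

$old = @(Get-BlockHashes 'C:\data\sample.dat')   # hashes recorded at the last backup
# ... the file changes between backup runs ...
$new = @(Get-BlockHashes 'C:\data\sample.dat')

# Any block whose hash differs (or that is brand new) must be sent over the wire.
$changed = 0..($new.Count - 1) | Where-Object { $_ -ge $old.Count -or $new[$_] -ne $old[$_] }
"Blocks to send: $(@($changed).Count) of $($new.Count)"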

This use of hashing algorithms has this blogger wondering whether Intronis might also be able to offer fixed-content archiving down the road.


September 13, 2007  2:58 PM

The Bill Gates Dream House and your IT career

Beth Pariseau

This post marks the beginning of a new category here at Storage Soup: Around the Watercooler, a catchall for the technology stories that have nothing to do with storage that we think you’ll find interesting anyway.

Today there was a press release on the wires titled “Using the Television to Wash Your Clothes.” According to the release,

The washing machine and the refrigerator are going to start “talking” to the television thanks to a new standard about to be published by the Geneva-based IEC. This new ability to network traditional household appliances with personal computers and audio-visual equipment will offer such possibilities as your television screen displaying the fact that the washing machine has finished washing your clothes or turning on an air conditioner from your personal computer.

One of my closest friends is a big-time tech geek, not only an IT guy in his own right but a sharp follower of consumer technology. The kind of guy who had a TiVo and a Netflix account years and years before they were cool or even well-known. You know the type–many of you out there in readerland are probably the same way.

This friend subscribes to the theory that the personal computer will one day become an appliance in the home, like a water heater or an HVAC system. The computer itself, like one of those devices, would sit in the basement, out of sight and out of mind to the homeowner (and, like a water heater or air conditioning unit, would require specialized servicepeople to administer and fix). Also like a water heater or AC, the computer would connect to all the systems in the house, automating and personalizing every other appliance from the toaster to whatever screen/keyboard/hologram combination comes to represent a Web browsing portal.

My friend is the first person I heard describe this home of the future, but he definitely wasn’t first with the idea. In fact, there’s already a prototype of this type of home being lived in daily in our country, and it belongs to none other than Bill Gates.

Of course, Gates is a guy rich enough to have tropical sand barged in to Washington state for his private beach, so some of this is just Cribs-esque because-I-can excess. But Gates has also publicly announced his intention to market versions of his home technology for the masses. One example of technology that could fit into the water-heater computer of the future is described in a Wikipedia article: “visitors are surveyed upon entrance and are given a microchip that sends signals throughout the house to adjust temperature and other conditions according to preset user preferences.”

The ratification of a standard for this type of technology suggests that other people are thinking along the same lines. And in even as little as another half-decade, we could be looking back on the “digital home” of this era as child’s play.

Think, also, of the possible career opportunities this represents for those in the IT field. Putting this much technology into the home could bring the IT guy out of his traditional data center, transforming him into the plumber of the 21st century. It’s already happening to some extent with businesses like Best Buy’s Geek Squad. And hey, before you scoff at that, I know some plumbers too, and they actually tend to make a very nice living.

P.S. Can’t miss pointing out just one more detail from the Wikipedia article on Gates’s house:

The number of building permits needed completely overwhelmed the Medina county clerk’s office, necessitating the move to a new Linux-based computer infrastructure to deal with the volume.


September 10, 2007  1:03 PM

NetApp vs. Sun debate rages on the Web

Beth Pariseau

In case you haven’t heard, NetApp has filed suit against Sun, saying Sun’s ZFS violates its patents. And it’s clear we’ve reached a new age in media when one high-profile company sues another, and a good portion of the sniping and posturing back and forth is coming from CEOs writing on corporate blogs, while having their words propagated and dissected via other Web 2.0 sources.

Dave Hitz, co-founder of NetApp, got things kicked off with his blog post Sept. 5, which was posted two minutes after NetApp’s official conference call began. Hitz’s blog post was also referenced in NetApp’s press call and in NetApp’s supporting materials for the announcement of the suit, and in many cases goes into greater detail than any of NetApp’s supporting documents save for the official copy of the complaints it filed in court. Hitz is also in a unique position to write about the case: he is one of the actual holders of some of the patents in question.

After NetApp’s initial splash, though, Sun set tongues wagging anew late Wednesday when it responded to NetApp’s announcement with a counterattack of its own. Once again, there was a formal statement released through the usual channels, but CEO Jonathan Schwartz also posted at length on his blog, providing color similar to that submitted by Hitz.

Of course you know what’s coming next: the rebuttal by Hitz to Sun’s counterclaims.

At some point, the legal wrangling and public posturing go beyond what we in the tech-focused world can or really should puzzle out. Clearly, there are two mutually exclusive sides to the story here, or there wouldn’t be a court case.

But that hasn’t stopped the court of public opinion from swinging into action, and for good reason: the ultimate outcome of this case, while currently beyond anyone’s prediction, has implications for the storage industry ranging from who owns snapshot technology to which product you, the storage user, will choose to deploy in your shop–ZFS, or a NetApp filer?

You know what they say about opinions. And what’s most interesting about this fight’s transfer to the blogosphere is the freedom users, industry experts and even interested parties have to weigh in on the situation, either because it’s a less formal forum or because they can hide behind a pseudonym online.

A sampling of the debate begins, of course, with the commentary on the blog posts. “Why can’t you and Dave Hitz just sit down across a table with a couple of beers (and/or lawyers) and hash this out?” a commenter on Schwartz’s post asks. “Sniping at each other via your blogs isn’t going to impress any customers.”

Other observers are doubtful about Schwartz’s claims that he didn’t know about the suit until after NetApp made its announcement. “You cannot expect anyone to believe that, in your position, you were unaware of NetApp’s suit until a shareholder pointed it out during questioning at today’s analyst event…if it is true, it doesn’t speak well for communication within Sun. I’d hate to think you’d play us all for fools,” wrote Joseph Martins, analyst with the Data Mobility Group, also on Schwartz’s post.

Then there are the comments, largely taking place on other forums like Slashdot, which sketch out the primary positions on this case, since the claims by each company are so contradictory it’s not possible to find a middle view. “It seems as though NetApp was rather nice about this whole patent thing from the get go,” wrote a NetApp supporter on the Slashdot comment thread. “It wasn’t until Sun threatened them that they acted and again acted fairly preferring a cross licensing deal rather than any cash payout in either direction.”

“Sun guy [sic] contradicts himself,” writes another armchair litigant. “‘Never demanded anything’  and ‘always been willing to license’ do not fit together. Licensing means demanding fee. NetApp says they do not use technology covered in Sun patents, still Sun is ‘always willing to license’ it.”

Other readers, however, side with Sun. “In England,” writes another Slashdot user, “What NetApp appears to be doing is called shouting Get Your Tanks Off My Lawn.”

As the initial discussions died down, however, new ideas about the suit, the agendas on both sides, and its effect on the market have emerged. Questions being raised include: How can open source technologies be regulated? What is the ultimate relationship between proprietary and open source products in the industry? Is a patent suit the best way to address them? The result of these discussions has been the beginning of a backlash against both companies.

“The only people that get hurt [are] the consumer[s], who [have] to pay all these pathetic lawyers and their pathetic clients gazillions, either in protection money against this racket, or in court battles over ridiculous things like linked-list file systems and outrageously vague one-click patents,” writes one poster calling themselves MightyMartian.

Another responds, “I’ve been looking at NAS/SAN boxes, mainly the StoreVault S500, or the higher-end NetApp 270, or a lower end Sun StorageTek 52xx for my work…I hate patents, love ZFS, but not sure which one to order now! Guess I’ll have to give Equallogic another call…”

Now that the initial excitement has died down, I’ve begun to wonder if, for all the bluster, the end result will be a cross-licensing agreement between the two companies. Some previous alliances have been forged out of two companies lining up against one another, realizing what they have in common (including common enemies) and reaching an agreement. The parties involved here certainly sound sincere, and it seems unlikely they would have taken this public after 18 months of negotiation if there were a way to resolve it privately.

But the skeptical side of me definitely wouldn’t be surprised to learn that the end result of all this pomp and circumstance is that it has drummed up attention for the eventual partnership or even acquisition of IP between the companies. Whether or not that was the plan all along will only ever be known to a few people, and otherwise will be, like the rest of this case, in the eye of the beholder.


September 10, 2007  10:02 AM

Much ado about Microsoft VSS

Maggie Wright

When Microsoft Windows Server 2003 was released about five years ago, there was much ado about its new Volume Shadow Copy Service (VSS) framework. This feature allows administrators to take snapshots of Windows volumes and then restore data from those snapshots. At the time, Microsoft claimed that it provided the backup infrastructure for Microsoft Windows XP and Windows Server 2003 servers, but companies have to date seen only negligible benefits from this technology. That is about to change.

The triangle of writers, requestors and providers that comprises the VSS architecture is coming together to give Windows administrators a powerful new alternative for backing up and recovering Microsoft servers using snapshot technology. This option is possibly as powerful as, or more powerful than, the much-hyped virtual tape libraries (VTLs) and data deduplication technology, and it may be an option companies already own.

The first side of the VSS triangle, the writer, is the application component. Applications such as Microsoft Exchange and SQL Server now ship with VSS writers that allow third-party software to call into the application and quiesce it. However, these calls only work if the second side of the VSS triangle, the requestor, supports them.

A requestor is an application, such as backup software, that controls the entire snapshot process: pausing the application, initiating the snapshot, restarting the application and then backing up the newly created snapshot. Most backup software products now support VSS, and organizations may have this feature lying dormant in their backup software or may be able to obtain it for an additional licensing fee.

The provider, which is the third side of the VSS triangle, actually generates the snapshot. Though snapshots can be created on Windows XP and Windows Server 2003 themselves using the built-in system provider, VSS-compatible backup software can also initiate snapshots on storage systems from most vendors, including incumbents like EMC, HDS, HP, IBM and NetApp or, with some scripting, storage newcomers like Compellent, EqualLogic and LeftHand Networks.

Moving the creation of the snapshots from the Windows server to the storage system can also remove the server overhead normally associated with backup. And since the volume created by the snapshot can be presented to another server, that server can then back up the data to tape.
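If you want to see the requestor-writer-provider dance in miniature without any third-party software, the built-in system provider can be driven from a script. Here is a quick PowerShell sketch using the Win32_ShadowCopy WMI class that ships with Windows Server 2003 (run it with administrative rights):

# Create a VSS snapshot of the C: volume using the built-in system provider.
$class = [WMIClass]'root\cimv2:Win32_ShadowCopy'
$result = $class.Create('C:\', 'ClientAccessible')

if ($result.ReturnValue -eq 0) {
    # Find the new shadow copy by the ID the Create() call handed back.
    $shadow = Get-WmiObject Win32_ShadowCopy | Where-Object { $_.ID -eq $result.ShadowID }
    "Snapshot created at device object $($shadow.DeviceObject)"
} else {
    "Snapshot failed with return code $($result.ReturnValue)"
}

Array-based snapshots work the same way conceptually; they simply go through the storage vendor’s own VSS hardware provider rather than the system provider.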

The maturation of VSS technology is significant because, with all of the hype about VTLs and data deduplication, everyone seems to have forgotten about this low-cost or potentially free option that users may already have available to them. Users willing to invest a little extra time exploring VSS may find that they can pay a fraction of the price of VTLs and data deduplication technology and achieve comparable or better results.

