October 11, 2007 8:19 AM
Posted by: Beth Pariseau
Storage managed service providers
Responding to pressure from newcomers to the storage SaaS market, Amazon announced yesterday that its S3 online storage service will offer a monthly uptime SLA of 99.9%, effective from Oct. 1.
This matches an offer first made by new online storage service player Nirvanix, which came out of stealth last month with a 99.9% SLA of its own; at the time, Amazon did not match it.
It will also come as good news for users, who in the past have complained of unplanned outages and performance issues with S3.
October 10, 2007 3:35 PM
Posted by: Dave Raffo
Data deduplication has been much discussed in the data center for the past year or so, and now is gaining attention in legal circles.
Quantum announced Tuesday that it filed suit against Riverbed in U.S. District Court in the Northern District of California, claiming the wide area file services vendor is infringing on a deduplication patent originally granted to Rocksoft in 1999. Quantum gained the patent when it bought ADIC in May of 2006, two months after ADIC acquired Rocksoft.
Quantum is seeking to stop Riverbed from using the technology in its WAFS devices, while seeking damages, attorney fees and other costs.
Quantum general counsel Shawn Hall said in a prepared statement that the suit was filed eight months after the backup vendor first approached Riverbed. “Unfortunately, this effort has been unsuccessful, and we felt we had no choice but to initiate action to protect our intellectual property in data deduplication,” he said.
Riverbed answered that it has done nothing wrong — at least not intentionally. “Riverbed’s policy is to respect the intellectual property rights of all third parties,” the company said in a statement. “Riverbed has no factual or other basis to believe that it infringes the patents of any third party, including Quantum. Riverbed intends to timely respond to Quantum’s claims and will defend itself in this action to ensure that its rights are fully protected.”
Quantum claims it has confidential licensing agreements with other vendors regarding its deduplication patent. It also has a cross-licensing agreement with rival Data Domain that became public knowledge earlier this year. As part of that agreement, Data Domain issued Quantum stock shares that were worth $5,850,000 when Data Domain went public in June. Quantum sold $2.1 million of shares during the second quarter.
Quantum considers data deduplication a key technology in helping it expand its backup products from tape to disk. Quantum began selling disk backup devices with deduplication last January.
“The patent gives us a strong leadership position that we want to protect,” Quantum spokesman Brad Cohen said.
Just a hunch, but the feeling here is that you can expect the Riverbed suit to end with the WAFS vendor and Quantum reaching a not-so-confidential agreement to settle the issue.
October 10, 2007 2:59 PM
Posted by: Beth Pariseau
Strategic storage vendors
The “Somebody’s gonna buy EMC!” talk is rearing its head in the marketplace again. This is a perennial conversation. In fact, we already covered it…two years ago.
This time, though, there’s a new wrinkle, highlighted in this very interesting post by a financial blogger at Barron’s. (Never thought I’d use the words “interesting” and “financial” in the same sentence!)
The picture painted by the Barron’s blogger is one in which the cart is running away with the horse–EMC’s “subsidiary” is now valued at almost the same level as EMC itself, and its stock is trading at 5 times the price of EMC’s (which only recently broke the $20 threshold). The suggestion is also that prospective buyers might be looking at the subsidiary, not the core company–VMware is literally changing the world right now, and EMC continues to be distracted by integrating its many, many acquisitions.
Then again, what kind of valuation can be put on VMware right now? What do you suppose the asking price would be? Would any company, even Cisco, be able to afford it?
Ah yes, there’s that “C” word again. Cisco has long been pointed to as the most likely suitor for EMC. They’re big enough, their products go hand-in-glove with EMC’s, and right now they’re pushing their data center management software, VFrame, and its integration with VMware, pretty hard. Meanwhile, throughout the data center, server virtualization is the name of the game, and right now VMware’s the only game in town.
Which means it’s highly unlikely EMC would part with VMware, or sell itself and lose control of an all-time cash cow just as it begins to bear serious fruit. But never say never–and as Barron’s pointed out, buying the whole farm might be worth it just for that one miraculous bovine.
October 10, 2007 2:32 PM
Posted by: mwright16
Data storage management
Successful patterns of behavior are repeated. That adage is as good a reason as any for why storage managers are reluctant to change their storage buying or management practices. Yet fundamental changes in how underlying data storage technologies work are forcing a corresponding change in storage management and procurement. Now is the time of year to lay the foundation for those changes.
The fourth quarter is typically when storage managers plan their budgets for 2008, but classifying new storage products is anything but cut and dried. The days of using budget categories like “backup software,” “disk” and “tape” are coming to an end as continuous data protection (CDP), data protection and recovery management (DPRM) software, disk cartridges, iSCSI storage systems and storage virtualization emerge. These technologies don’t quite fit into the tidy budget categories that storage managers have used over the years.
Storage managers are re-thinking and re-wording budget categories so category descriptions can be more inclusive of new storage technologies. For instance, “backup software” and “tape” might become “data protection software” and “data protection hardware,” respectively, while the “disk” category may be described as a “storage network.” Simple wording changes like these can help storage managers prepare their management teams for the fact that new storage technologies are coming.
Bringing new storage technologies into a company is never an easy task and the larger the company, the more difficult it becomes. However, sticking to storage technologies that have worked in the past is increasingly the wrong way to manage storage. Using new storage technologies, companies stand to get more mileage out of their storage while becoming more efficient in how they manage it. Examining and changing the wording in your budget is a simple way to start the process of change without putting either yourself, or your company, at undue risk.
October 10, 2007 2:04 PM
Posted by: mwright16
The evolution of continuous data protection in companies is taking shape. BakBone Software’s inclusion of CDP as a new feature in its NetVault:Backup 8.0 release puts it among a growing number of products, such as Asigra’s TeleVaulting and InMage Systems’ DR-Scout, that use CDP to protect Windows and Linux servers.
The rationale for including CDP in backup is simple. Easy backup and recovery of standalone Linux and Windows servers remains a significant challenge for administrators. Companies still have too many of these servers and too few administrators, who struggle to find a cost-effective means to back up and recover them.
Using CDP as part of the backup client addresses this issue on several fronts. It replicates data to disk locally and remotely; it provides fast recoveries to any point in time within a retention window (typically 3 to 30 days); and, by creating and keeping a complete copy of the data on disk on another host, it lets administrators manipulate that copy in multiple ways.
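To make the journaling mechanics concrete, here is a minimal sketch of the idea, not any vendor’s implementation: every write is logged with a timestamp, and a point-in-time recovery simply replays the journal up to the requested moment.

```python
class CDPJournal:
    """Toy model of a CDP write journal: each write to a block is
    recorded with a timestamp, so the volume can be reconstructed
    as of any past moment within the retention window."""

    def __init__(self):
        self.entries = []  # (timestamp, block_number, data), append-only

    def record_write(self, ts, block, data):
        self.entries.append((ts, block, data))

    def recover_as_of(self, ts):
        """Rebuild the block map as it existed at time ts."""
        image = {}
        for entry_ts, block, data in self.entries:
            if entry_ts <= ts:
                image[block] = data  # later writes overwrite earlier ones
        return image

journal = CDPJournal()
journal.record_write(100, 0, b"v1")
journal.record_write(200, 0, b"v2")
journal.record_write(300, 1, b"db")
print(journal.recover_as_of(250))  # → {0: b'v2'}, the state before t=300
```

Real products index the journal so recovery doesn’t scan every entry, and they age entries out of the window, but the recovery semantics are the same.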
October 10, 2007 1:16 PM
Posted by: Tskyers
Storage tips
My name is Tory Skyers. Through circumstances not entirely beyond my control (!!) I have been deeply involved throughout my career in various types of centralized and distributed storage. Now, at the end of a long chain of events beginning in Long Beach, CA with Curtis Preston and some blinky magnets (I’ll let you use your imagination), I’ve been offered an opportunity to share some of my experiences and insight with you.
I’ve been agonizing for a week over what to say in my first blog post. I thought it had to be earth-shatteringly profound, so of all the catchphrases and tag lines I came up with, this seemed to sum it all up best:
Hello, and thanks for reading my blog.
What do you think? Just imagine a guy smiling ear to ear and waving at you from behind his keyboard.
An admission: I’m absolutely fascinated by storage. The technology that goes into connecting computers and people to storage today was the stuff of science fiction 20 years ago. Pause for a second and take a look at where we are in storage: 1 TB hard drives, 55GB optical discs, 10Gb Ethernet, 4Gbps Fibre Channel, 3 millisecond seek times, 300MBps throughput… all these numbers add up to wow, at least to me. When I think about all the technology out there, I feel like that kid at the toy store window with my eyes the size of saucers, staring at the GI Joe with the Kung-Fu grip, and the Spiderman Hotwheels set.
Here in my little corner of cyberspace I’ll be blogging about some of those stare-inducing storage technologies from my perspective, which is that of a network administrator (and, according to friends, one sometimes “warped and twisted” by my own particular brand of logic). I’ll also be touching on the ennui (an SAT word I’ve been dying to use in a sentence) that I see creeping into the market. Check back from time to time and let me know if you agree with me. (And dig out that old SAT prep book while you’re at it–send me a word or two and I’ll see if I can roll it in.)
One last thing: I gave a presentation on mobile storage at the recent Storage Decisions show in New York, and at the end of the presentation I mentioned a few scripts I wanted to share with the attendees. Below is a copy-and-paste of a simple script that uses AdFind from Joeware.net to archive home directories of users who no longer have Active Directory accounts. This script can certainly be more elegant, so feel free to expand, expound and extend. There are a few things on the “to-do” list for it: first, make it self-contained so it doesn’t need an input file (i.e., do the AD query using ADODB or something similar). Second, add logic to validate permutations of a username or directory. Third, make it a pretty HTA (HTML Application). I’m working on migrating this script to PowerShell.
The code is below the jump. Copy it out using Notepad (not WordPad) or a script editor and save it as a .vbs file. Run it from the command line with an input text file containing one username per line. You’ll need to insert specifics for your environment, such as domain names.
Again, thanks for the read! Continued »
October 4, 2007 3:33 PM
Posted by: Beth Pariseau
Strategic storage vendors
Sun’s campus in Burlington, Mass. Photo by Beth Pariseau.
Today found me at Sun’s campus in Burlington, Mass., one of three cities across the world where Sun was holding “virtualization chalk talks” with press–the others were San Francisco and London.
Why this big fanfare? Sun is coming out with its own server virtualization hypervisor based on open-source code from Xen, along with its LDOM “container” offering for UltraSparc servers. Rumors have been circulating about it recently, and Sun wanted to clarify its vision.
So what does this have to do with storage? My question exactly. The answer: at this point, probably something, but Sun’s a little light on the details. ZFS was mentioned, of course, as the underlying filesystem for virtual machines running on its xVM Server.
One of the VPs presenting today, Sun’s Connected Systems group leader Steve Wilson, also pointed to the example of the Texas Advanced Computing Center (TACC), which is running a 4000-node IBM blade server farm attached to Sun’s big honkin’ Magnum InfiniBand switch (3456 ports) and a grid of (guess what) “Thumper” SunFire X4500 servers for storage.
This all ties in–vaguely, at this point–with the announcement from Sun on Monday that it’s melding its server and storage groups. Sun’s futuristic vision is that server processors and Ethernet pipes are catching up to traditional storage subsystems and Fibre Channel fabrics in terms of performance and scalability. It’s essentially a different twist on Sun’s “the network is the computer” mantra that suggests “the server is the SAN.”
Meanwhile, Sun is announcing a software product to go along with xVM called Operations Center, which will manage both physical and virtual server systems in one console. Eventually, when the lion lies down with the lamb, and grid data centers run in server / storage peace and harmony, said software (theoretically) will also manage the Thumper grids everybody’s going to be using for storage.
That’s the idea, anyway. The roadmap is, to put it mildly, unclear. Sun spent most of its time today talking about its new server virtualization technology, and aside from ZFS and Thumper, no specific storage products or services were mentioned, nor was any time frame given for the extension of Operations Center to storage systems, or when or if Sun’s virtualization vision will extend to incorporate traditional storage subsystems.
As always, when it comes to Sun and storage, ZFS is the glue, the linchpin, the centerpiece and the mantra. ZFS is what makes Thumper tick, it’ll be the file system underpinnings for Sun’s virtual servers, and it apparently will also be key to this “data center of the future” Sun is planning out.
But there’s an elephant in the room. “What happens,” asked this reporter of Sun’s officials in Burlington, “if NetApp wins its lawsuit?” (For those of you unaware, NetApp has filed a cease-and-desist lawsuit against Sun claiming ZFS violates NetApp’s NAS patents. The two may eventually drop their posturing and come to some sort of cross-licensing agreement, but as things stand right now, a NetApp win means not a licensing deal but a mandate to stop distributing ZFS altogether, period.)
You could’ve heard a pin drop when the question was asked today. Finally, the response was, “We are not going to talk about ongoing litigation.”
Reading between the lines, though, it appears that the rest of Sun is following Jonathan Schwartz’s somewhat nonchalant attitude toward the lawsuit. The message seems to be that Sun is so confident in its legal position that it doesn’t feel compelled to hold back one bit on ZFS. At the same time, it raises the question of what happens as Sun moves forward with a storage product strategy so heavily dependent on a disputed piece of IP.
September 21, 2007 8:40 AM
Posted by: mwright16
Data storage management, Storage backup
CDP and DPRM software are relative storage newcomers, but they may be the software that finally delivers on the promises of their SRM and storage virtualization software predecessors.
Storage resource management (SRM) and storage virtualization software have taken their turns sharing the storage spotlight over the past few years but have, for the most part, largely failed to deliver on their promise. Though companies may use them in some tactical way, such as doing LUN masking, fabric zoning or data migrations, neither has really delivered the simplified, automated storage management environments that vendors promised and customers hoped they would.
My company tried both; we saw the strategic value that SRM and storage virtualization software could deliver, but we never could figure out a way to turn that promise into a reality. When push came to shove, it became almost impossible to find a risk-averse and profitable way to transition from Excel-based FC SAN management to SAN management built on these two software tools.
What my company needed, and what is still needed, is a method to segue from FC SANs managed by Excel spreadsheets to the introduction of SRM and storage virtualization software without a rip-and-replace strategy. So, it was while I was evaluating the latest generations of data protection and recovery management (DPRM) software and continuous data protection (CDP) software that I may have stumbled across a way for companies to make this transition.
Companies usually bring DPRM software in-house to report on the successes and failures of backup jobs. Though DPRM software still does that, it is quickly expanding to monitor and report on other components of the backup infrastructure, including server performance, fabric switches and virtual and physical tape libraries. Though the impetus for offering these features is to better troubleshoot systemic problems in the backup infrastructure as well as do capacity planning, companies are inadvertently using DPRM software in much the same way SRM software was intended.
A similar pattern is emerging with CDP software at the high end, with products such as EMC’s RecoverPoint, HP’s CIC and Symantec’s CDP/R. These CDP appliances install into FC SAN fabrics and operate much like the original FC SAN-based storage virtualization appliances, except that they journal all writes and are used only when production storage fails.
The reason users are now willing to introduce either CDP or DPRM software into their production environments is that they no longer feel like they are risking their production applications or stretching their budgets for products whose value proposition is dubious. CDP and DPRM products solve immediate corporate pain points, are justified with existing dollars and require less risk – a win for both the vendors and the users.
Now the question is, will CDP and DPRM software eventually evolve to assume responsibilities that their SRM and storage virtualization software predecessors never really delivered on in the minds of customers? My guess is yes.
September 20, 2007 12:15 PM
Posted by: Ndamour
Data storage management, Storage technology research, Strategic storage vendors
Not a day goes by that I do not hear from yet another storage or server vendor that its offering, whatever it is, is green. This mania started in earnest about a year ago. Prior to that, the green movement was pretty much restricted to organizations outside the computer industry. So, what is really going on? What has caused every software and hardware company to suddenly formulate a green message?
I have a theory, and you heard it here first. I think there is a fundamental grassroots movement toward green that has started in the U.S., and it is picking up momentum like nothing else I have seen in thirty years. This movement is more powerful than the Presidential elections and other important matters facing the country. It is bigger than Exxon and Mobil. It is bigger than GM and Ford. For years, the debate has been raging about global warming. No matter which side of the debate you place yourself on, the green movement has begun. And because it is now becoming fashionable, every company in every industry will feel the need to do something “green.”
I believe we are now in phase 1 of this movement. In phase 1, each company takes stock of what it has in its product line and extracts what it can of a green message. Granted, most, if not all, of these companies had never thought of any of their products in terms of green before, not in product development and not in marketing. Of course, good design practices prevailed, and many products ended up with lower power usage or smaller packaging, but they were hardly ever viewed within the context of green. So, in phase 1, what we are seeing is a recasting of the company message to incorporate green.
I see it every day. Sometimes I laugh when I see a storage company twisting and turning its message to incorporate green. More than one company has even stated to me that it is so green that its logo has green in it. Give me a break. The logo was done years ago, when green was equated with the color of a person’s face upon seeing a ghost.
But I frankly don’t care.
I am thrilled just to see the storage companies participate in the green movement. So what if 75% of what I see today is recasting of a message relating to an older product. So what if Manhattan’s energy and space crunch started the ball rolling. I think once the company is committed to the green message they will design their next product accordingly. They can’t escape it. That is why I believe the green movement will have a genuine impact in the next five years. Let the companies play the game. Play along with them. Give them slack for now. Because once they are in, they are in. I love it.
Before you think that there is nothing real in the products today let me restate something. There are technologies that have hit the market in the past three years that are making a serious green impact. Data deduplication is one such technology. It has hit the market on the secondary storage side first, that is, applied to backup/restore and archiving markets. When used in appropriate ways, one can reduce the amount of disk required by a factor of 20. No matter which way you look at it, 1 TB of storage uses a lot less power and requires a lot less cooling than 20 TB. Thin provisioning is another good example. I chose these examples to illustrate a point: it is not simply hardware technologies that deliver green. In fact, at Taneja Group we believe software will play a huge part in the greening of storage and servers. Not to say that hardware wouldn’t play a role. Look at Copan’s MAID technology, for instance. Or IBM and HP’s blade server technology. New techniques for airflow through racks, nanotechnology and new data center designs will all contribute. But, we believe the impact of software technologies will dominate, especially with installed hardware.
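To see where a 20:1 figure can come from, consider a toy model of chunk-level deduplication. The fixed-size chunking and the backup sizes below are illustrative only; shipping products use variable-size chunks and far smarter indexing, but the space arithmetic is the same:

```python
import hashlib

def dedup_ratio(streams, chunk_size=4096):
    """Store each unique chunk once, keyed by its content hash, and
    compare raw bytes ingested with unique bytes actually stored."""
    store = {}        # chunk hash -> chunk
    raw_bytes = 0
    for data in streams:
        raw_bytes += len(data)
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    stored_bytes = sum(len(c) for c in store.values())
    return raw_bytes / stored_bytes

# Twenty nightly "full backups", each 95% unchanged data plus one
# chunk of new daily data, collapse to roughly a 19:1 ratio:
backups = [b"A" * 4096 * 19 + bytes([d]) * 4096 for d in range(20)]
print(f"{dedup_ratio(backups):.0f}:1")  # prints 19:1
```

Repeated full backups are the best case, which is exactly why deduplication hit the secondary-storage side of the market first.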
Green will soon become a competitive advantage. Because of the financial implications, real change will occur. Soon even the Mobils and the Exxons will have to yield to the pressure. That is how strong grassroots movements are. I believe the time is here. And I couldn’t be happier.
Note: Recently Taneja Group wrote a Technology In Depth paper on this topic. If you would like a copy please send a request through www.tanejagroup.com.