“You need HOW MUCH for storage?!” That question has been heard by many of us currently submitting budgets for the next calendar year, quickly followed by “Are you SURE you need that much disk? Didn’t we just get disk last year? Where did they all go!? I want your house audited. Now!”
Okay, maybe not the audit part, but for most of us, getting the type of disk we need in the quantity we need it is an uphill battle. Add SSD, deduplication, and longer-term retention to the mix, and things are getting a bit hairy with my budgetary requests. I’m at such a point now with a few of my smaller clients, and when they get that “you’re crazy” look, I bring up the chargeback model.
I think I just heard a collective sigh from the interwebs.
I understand both sides of the chargeback dilemma: the accounting side, that has to somehow keep track of all this without keeping track of all this; and the IT side, that is constantly being painted as the cost center only because no one is taking ownership of their parts of the “plumbing.” People (read departments) will request outrageous resources when they don’t have to directly foot the bill. That part I get, but are they so vehemently against accounting for their infrastructure usage?
In my opinion, chargeback would actually lead to better data management habits — at least in the long term — because if you have to pay for everything out of your own budget, then you’ll be more careful about separating what you need from what you want. How many of our managers and accounting folks have processes in place to account for each department’s use of the “utilities” that make up IT and understand that IT isn’t the root of all expenses?
I had an energetic debate with a co-worker about this very issue. I took the stance that chargeback is the way to go. He offered a more community-oriented accounting method. We went back and forth, point and counterpoint, until concluding that it just depends on what your business environment will support and the level of organization that business has in place.
For instance, if you have a well-organized, project-oriented IT environment and a project portfolio ready for sizing, you can plan a community budget very well and effectively fund additions to your infrastructure through a single IT budget. The reality from <i>my</i> experience (read: SMB clients) is that most companies are not so well-organized, don’t have a project portfolio for the next 12 months, and will not be able to identify budgetary requirements for infrastructure improvements.
In these cases, chargeback (or, at the very least, departmental accounting) is key to being able to answer my opening question with confidence.
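To make the departmental-accounting idea concrete, here is a minimal sketch of what a storage chargeback calculation might look like. The department names, tiers, and per-GB rates are all invented for illustration; a real model would pull usage from the array's reporting tools and reflect your actual cost structure.

```python
# Hypothetical storage chargeback sketch: bill each department monthly
# for the capacity it consumes, at a different rate per storage tier.
# All names and rates below are made up for the example.

RATES_PER_GB = {"ssd": 1.50, "fc_san": 0.50, "sata": 0.125}  # $/GB/month, assumed

# Assumed usage figures, in GB, per department and tier.
usage_gb = {
    "marketing":   {"fc_san": 500, "sata": 2000},
    "engineering": {"ssd": 200, "fc_san": 1500},
}

def monthly_bill(dept_usage):
    """Sum capacity * rate across tiers for one department."""
    return sum(gb * RATES_PER_GB[tier] for tier, gb in dept_usage.items())

for dept, tiers in usage_gb.items():
    print(f"{dept}: ${monthly_bill(tiers):,.2f}")
```

Even a toy model like this surfaces the behavioral point: the moment a department sees SSD billed at many times the SATA rate, the "need vs. want" conversation starts on its own.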
Traditional SAN storage may be easy to bill for, but what of virtualized storage? Take it a step further: how about Softricity/Microsoft’s Softgrid? (Softricity is the company Microsoft acquired not too long ago that allows for application-level virtualization as opposed to host virtualization.) How do you quantify and itemize a streamed, virtualized application?
Then there’s the question floating just below the surface of the chargeback debate: How do I, as a department, know you are giving me what I’m being “billed” for? That question opens a giant can of worms in my mind (and there are already creepy crawlies up there, no need to add worms to the mix).
The crux of what I’m getting at is: Are we as technologists — and storage pros specifically — asking for too much or too little when it comes to chargeback? Are there still companies out there that don’t see the light when it comes to chargeback and departmental accounting? Should we as storage pros be leading the way for other areas of IT to follow our example?
Brocade’s earnings report today made it clear it is gaining market share in the Fibre Channel switch competition against Cisco.
No surprise there. Cisco’s earnings last week disclosed its SAN director and switch revenue dropped 14 percent year over year. Brocade today said its switch revenue grew 5 percent and director revenue was up 3 percent over last year.
Why is Brocade picking up steam? Is it because it beat Cisco to the market with 8Gbps directors and switches? Do customers prefer Brocade’s next-gen data center DCX Backbone over Cisco’s Nexus switches? Or are storage vendors – perhaps stung by perceptions that Nexus is not OEM-friendly – pushing Brocade over Cisco?
A case can be made for all three, although it’s probably too early in the DCX and Nexus product cycles to know how much any of them come into play. Brocade will have had 8-gig products out for close to a year by the time Cisco pushes out its first 8-gig MDS directors late this year. Even Cisco’s data center solutions marketing manager Deepak Munjal told my colleague Beth Pariseau this week that Cisco lost some business from “users who absolutely need 8 Gigabit Fibre Channel” and got it from Brocade.
Brocade execs say 8-gig products are still a small minority of its sales, although its data center infrastructure division GM Ian Whiting said 8-gig is showing up more with customers building new data centers around the DCX Backbone. Brocade’s strong services results – up 43 percent year over year – are also due in large part to customers designing new data centers around the DCX, Whiting said.
Analysts asked Brocade CEO Mike Klayko on today’s earnings call about storage vendors preferring Brocade over Cisco now. He downplayed that notion, saying “I don’t think there’s a concerted effort of OEMs to go one way or other. I think we have best solution in the marketplace.”
But not everybody is sure. In a note to clients earlier this week, analyst Kaushik Roy of Pacific Growth Equities wrote that Brocade is gaining market share and “we believe that is partly because of Cisco’s lack of 8-gig blades and possibly because EMC is favoring Brocade over Cisco at this time.”
Whiting said Brocade is less likely to compete with storage vendors than Cisco. “Our role in the industry is enabling solutions like virtualization, encryption and other services,” he said. “Cisco’s vision is they expect to deliver those solutions themselves. That creates conflict among major players like the IBMs and Hewlett-Packards of the world.”
Brocade said it saw little revenue last quarter from its HBAs, which are still being qualified by storage vendors.
Overall, Brocade’s $365.7 million revenue last quarter increased 12 percent from last year.
Don’t you hate it when you go into iTunes and try to play a purchased song, and it won’t let you until it authenticates…again? Well, take away the authentication step and leave things hanging, multiply it by hundreds of enterprise virtual servers that cost far more than $0.99 each, and you’ve got what some IT managers have been experiencing today.
Thanks to an issue with VMware’s licensing code, VMs fail to power on, fail to leave suspend mode, or fail to migrate with VMotion – meaning those with HA/DR or other applications that require frequent migration of servers are having particular trouble.
According to VMware’s official site,
An issue with ESX/ESXi 3.5 Update 2 causes the product license to expire on August 12, 2008. VMware engineering has isolated the root cause of this issue and will reissue the various upgrade media including the ESX 3.5 Update 2 ISO, ESXi 3.5 Update 2 ISO, ESX 3.5 Update 2 upgrade tar and zip files by noon, PST on August 13. These will be available from the page: http://www.vmware.com/download/vi. Until then, VMware advises against upgrading to ESX/ESXi 3.5 Update 2.
“Noon PST” August 13 – around 36 hours after the bug first struck today. Needless to say, VMware users are not happy.
My fellow blogger here on the Soup, Tory Skyers, hasn’t installed update 2 yet, but the glitch worries him nonetheless. “With all the headaches for VMware over the last couple of months, like the CEO leaving, I’m flabbergasted that they of all people would let something like this happen,” he told me today. For Skyers, this kind of bug was worse than flawed code. “It’s not really a bug per se – the license expiration is supposed to do what it’s doing, just not in these circumstances. This is a QA issue,” he said.
Update: Just received a statement from VMware PR saying that while a fully-tested patch will still have to wait until tomorrow noon (PT), an “express patch” for production servers will be made available today. Let’s hope the express update is also bug-free!
This story isn’t specifically about storage vendors – it’s about airline carrier JetBlue – but I think it’s a great example of the power of the Internet and populist publishing when it comes to getting the news of your experience with a vendor across to other potential customers and getting that company to respond to you.
Bill Baker was trying to travel home on JetBlue when his flight was cancelled. He said he wasn’t so upset about the cancellation as the manner in which it was handled – no refunds, no sleeping accommodations, no agreements with other carriers to put passengers on other outbound flights.
Unfortunately, what JetBlue didn’t know about this particular passenger is that he works as a technology publicist in Connecticut, and his response was to do, I’m sure, the one thing that would have gotten the undivided attention of his own clients: tell everyone on the Internet about his bad experience.
As CNet’s Charles Cooper put it, “We’re long past the era when companies could cavalierly screw over their customers without risking public humiliation…see, there’s this thing called the Internet…”
Maybe this is true when it comes to consumer companies, but this has not been my experience at all when it comes to customers of enterprise products. You’d think that the comparatively large size of both products and price tags would make the enterprise a market even more rife with publicly-aired criticisms and calls to action, but you’d be wrong.
I find many enterprise customers fearful of speaking publicly, especially about their vendor. I understand there are big bucks and politics at stake in enterprise capital equipment purchases, but it saddens me when users act like they work for the company that sells them storage products rather than the other way around. Especially when users trade daily, arduous efforts to manage unmanageable products or problems for keeping the political peace. It would be great for the industry, I think, if more storage users started blogs like Baker’s.
Rackspace Holdings became the first company to complete an IPO since March when it priced its shares at $12.50 Thursday (the price fell to $10.01 Friday on the first day of trading).
The hosting company doesn’t specialize in storage, but is beta testing its CloudFS Web-based storage service that competes with Amazon S3 and others. Rackspace also launched a dedicated NAS (DNAS) storage service in June aimed primarily at file sharing Web sites and companies that deal with rich media files. That’s built on NetApp FAS2000 storage, and Rackspace has an Unmetered Backup service through a partnership with CommVault.
Only a small piece of Rackspace’s revenue ($130.8 million last quarter and $362 million in 2007) comes from storage products, and it could be tough to grow in the storage space with competition heating up.
“Storage is tough [for hosting companies],” Forrester Research analyst Stephanie Balaouras said. “If it’s traditional transaction oriented processing, the storage really needs to be in close physical proximity to the servers for performance reasons. If it’s non-transaction processing — you need to store Web 2.0 type of content like photos, audio, and video — you’re less performance sensitive and you can rely on storage services in the ‘cloud.’ But a lot of startups who need this type of storage service seem to be going with Amazon S3.”
Considering Amazon’s track record for outages, maybe Rackspace is getting into storage at the right time.
While EMC generally received high marks in the storage industry for bringing thin provisioning, MAID, and solid state disk into its midrange arrays, the CEOs of smaller competitors 3PAR and Compellent say they’re not worried about the new Clariion CX-4 features stunting their growth.
3PAR and Compellent completed IPOs last year, and have continued to grow in their early quarters as public companies. 3PAR increased revenues 80 percent last quarter and turned a small profit ($2 million) for the first time. Compellent revenues grew 74 percent since last year and it narrowed its losses to $603,000 – down from $1.9 million last year.
Compellent and 3PAR likely owe some of their success to beating the big guys to the punch with thin provisioning and other features. Now EMC comes along with thin provisioning and raises the bar in the midrange with SSD support and gets the jump with 8-gig FC connectivity. But Compellent CEO Phil Soran points out his company’s Storage Center systems have supported thin provisioning since 2004. He said Compellent has even offered SSDs in “special bid” custom installations.
Soran said EMC’s CX-4 “shows that others have been leading in the innovation space for many years now. It wasn’t long ago that EMC said they don’t recommend thin provisioning, and we’ve had it for almost five years.”
Thin provisioning pioneer 3PAR has had it in its InServ systems since 2002. Now thin provisioning has popped up in Hitachi Data Systems’ USP-V platform, EMC’s SAN and NAS boxes, and IBM’s SAN Volume Controller (SVC) over the last year. IBM also offers thin provisioning in the XIV systems it will rebrand in the coming months.
But 3PAR CEO Dave Scott says not all thin provisioning is created equal. When asked during his company’s earnings call last week what impact he thought EMC’s virtual provisioning would have on InServ sales, Scott said “not a lot. It’s becoming clear to customers who do any analysis of the [thin provisioning] capabilities that EMC and Hitachi put together know they don’t reflect the kind of ease of use and automation necessary to implement this efficiently. You end up with what we call chubby provisioning, that is, it enhances operational overhead risk.”
Soran also criticized EMC for having a “model-based architecture” with disparate midrange SAN, enterprise SAN, and NAS systems while competitors such as Compellent, 3PAR and NetApp have one model that doesn’t require rip and replace upgrades. EMC claims that is one path it will not follow its rivals down, although some analysts suspect it might.
Google has revamped the hardware and some aspects of the software for the Google Search Appliance, which can now support 10 million files in a single box, up from 3 million in the previous version.
Google has also added new biasing features which allow corporate administrators to weight search results differently for different end users – for example, the marketing and engineering departments might get different documents returned first for the same keyword. This is termed front-end biasing.
Metadata biasing is also new with this release. That lets admins rank metadata such as create date or author on a sliding scale of importance, so that, for example, documents written by the CEO are returned first.
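The weighting idea behind metadata biasing can be sketched in a few lines. To be clear, this is an illustration of the concept only, not Google's actual ranking code; the field names, weight values, and document scores below are all invented for the example.

```python
# Rough sketch of metadata biasing: boost a document's base relevance
# score according to admin-assigned weights on metadata fields, so that
# (for example) documents written by the CEO rank first.
# All weights, fields, and documents here are hypothetical.

METADATA_WEIGHTS = {("author", "ceo"): 2.0, ("doctype", "policy"): 1.3}  # assumed

def biased_score(base_score, metadata):
    """Multiply the base relevance score by any matching metadata boosts."""
    score = base_score
    for field, value in metadata.items():
        score *= METADATA_WEIGHTS.get((field, value.lower()), 1.0)
    return score

docs = [
    {"title": "Q3 memo",    "score": 0.8, "meta": {"author": "CEO"}},
    {"title": "Lunch menu", "score": 0.9, "meta": {"author": "intern"}},
]
ranked = sorted(docs, key=lambda d: biased_score(d["score"], d["meta"]), reverse=True)
```

In this sketch the CEO's memo outranks a document with a higher raw relevance score, which is exactly the kind of outcome the sliding-scale weights are meant to let administrators control.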
Data growth is making indexing and search necessary, according to Forrester Research analyst Stephanie Balaouras. “I would say, once you can measure your storage in hundreds of TBs,” data indexing will probably be necessary, she said.
“Below 100 TBs, you should have a rough understanding of how that data is broken down between email, unstructured files, and database data,” she added. “And with some basic policies, the most important information should be ingested into a content management system.”
Former Brocade outside counsel Wilson Sonsini Goodrich & Rosati has agreed to pay Brocade $9.5 million to settle a lawsuit the company is bringing against 11 of its former board members and executives in an attempt to recoup what it says are $830 million in costs relating to options backdating charges against former CEO Greg Reyes. The news of the settlement was first reported today by The Recorder.
The original report also shows how Brocade arrived at the $830 million figure:
The voluminous complaint details what the backdating scandal cost Brocade, including $160 million to settle the securities class action, and $7 million to settle SEC charges. The company spent about $7.5 million on its two internal investigations and another $67 million defending Reyes, Jensen and other former executives and $30 million defending itself.
The complaint also said Brocade lost $470 million when the backdating scandal dashed a proposed merger with Cisco.
According to law.com, Wilson Sonsini was named a defendant in an earlier derivative lawsuit filed in California’s Northern District court in April, which has been consolidated into the new suit filed last Friday and reported by the Recorder. The original complaint relating to that case cites a 2006 piece by Business Week reporter Peter Burrows, where the revelation of the scuttled Cisco acquisition was attributed to Reyes.
“A successful sale of the Company potentially could have expunged defendants of their liability to Brocade for the stock option manipulation then under investigation by the Audit Committee,” according to the April court filing.
For those of us out here watching with popcorn, it’s fun to think about how different things might have been if Cisco did buy Brocade back then. Today, Cisco would likely be selling Brocade switches as its midrange platform and McData would probably still be foundering on its own.
On EMC’s conference call this morning officially launching the CX4, storage division president Dave Donatelli told analysts that SSDs will also be qualified for Celerra NAS systems over the next six months. The CX4-960 will support SSDs in October, and Symmetrix DMX systems are already available with SSDs.
Despite a growing number of shared features, Donatelli said speculation about a merger of the product lines, given the multiple overlaps between Celerra, Clariion and Symmetrix, is “due to a profound lack of understanding” in the market. “The Symmetrix has 24 boards,” he said. “If you lose a whole board, you lose 1/24th of your performance.” Clariion is a two-board system.
But that hasn’t stopped people from asking this question of EMC. Industry analysts, Wall Street analysts, users…especially the users – a few told me they weren’t sure which way to go between some EMC platforms going forward. A Wall Street analyst asked Donatelli on the conference call today about whether EMC plans to consolidate software and hardware platforms “the way NetApp has.” That had to hurt.
There have been rumors about the CX and Symmetrix DMX merging, communicating, and generally coming together, for years. I remember one rumor that the CX and DMX would be able to share data with the release of ECC 6. Obviously that one never came to fruition. But EMC did standardize hardware components such as disk shelves between the CX and DMX for manufacturing efficiencies a few years ago. And the capacity points, replication and management features have been getting closer and closer together.
“It’s a running joke, every time we update Symmetrix, we get the question about what’s going to happen to Clariion, and every time we update Clariion, we ask what’s going to happen to Symmetrix. We keep making Clariion bigger and Symmetrix smaller,” Donatelli said. But he didn’t say why.
That’s the thing. EMC’s not going to come out and say, “Yes, we’re merging our primary storage products the same way we’re planning to merge our backup products, so please don’t buy anything we have out currently in anticipation of something to come.” So no matter how vociferous their denials that this might be the case, for some in the industry, their actions are speaking louder than their words.
A couple of months ago, my co-blogger Tory Skyers wrote a post questioning the impact of data deduplication on evidence preservation and chain of custody best practices for e-discovery data.
It’s a question many users are asking, apparently, as “Mr. W. Backup”, GlassHouse Technologies vice president of data protection services W. Curtis Preston recently addressed it on his Mr. Backup Blog. First, Curtis brings up the fact that if today’s legal standards for electronic evidence considered dedupe/compression a change in the data rendering it inadmissible, data from tapes (naturally compressed) wouldn’t be admissible either.
But Curtis also brings up another interesting point, and this is where I think the e-discovery waters have been muddied by everyone and their brother positioning products for that space. He writes:
You have to address the entire chain of custody. Let me give an example. If every email that is sent or received by an email system is immediately archived and stored in an archiving system that can [demonstrate] for anyone concerned when/where an email came from and how long it has been stored, you could use that system to build a non-repudiatable source of data that could be used in legal proceedings. (It’s not just about the software, of course, as you have to address access and all other kinds of issues, but that would be a start.) BUT, IMHO, non-repudiation requirements have much more to do with proving chain of custody than they do with the content of the data, and dedupe systems are just as good at proving that as any other storage system — [in other words] they don’t. It’s usually up to the system that put the data in there and took it out.
I think Preston raises many good points, but this doesn’t negate some of the points Skyers also raised. Among the biggest: “Are we sure our legislators understand the differences between a zip (lossless) and JPEG (lossy) compression?…The answer to these questions, while second nature for us technology folks, may not [be] so second nature for the people deciding court cases.”
If IT pros are mulling and chewing over this question, you can be sure lawyers are, too. And out of the pool of citizens that could make up a jury of your peers, how many would immediately understand Preston’s paragraph above?