Everyone and their brother has an email archiving story to tell you these days, or so it seems. But in a teleconference titled “Email Archiving Mistakes to Avoid,” Forrester Research analyst Jo Maitland advised clients to keep things simple when selecting a product and setting policies.
Users need to begin with a strategy that addresses backup and archiving separately (apparently not everyone in the storage industry read Mr. Backup’s definitive “Backups are not Archives” article a couple of years ago…). Then, they should take into account their requirements for the deployment – whether it will be for end-user restore/Exchange optimization or for legal discovery.
According to Maitland, this is the most crucial step in determining which product will work best in a given environment, and one not everyone clearly understands. This isn’t helped by an overcrowded market in which vendors try to shout over each other with ever-more-complex features, but Maitland boiled it down to a few key things. An archive for e-discovery should mark data for legal hold and notify an administrator when new content hits an existing search; those shopping for e-discovery should also look for a product that covers more data types than just email.
For email optimization and end user restore, the product should allow access to emails via a Web browser, automatically copy messages to the archive and delete them from primary storage (too many stub files can still clog up the mail server), and allow simple retrieval back to the inbox.
The two purposes for an archive – e-discovery and end-user restore – can be mutually exclusive, Maitland said.
Once the requirements are determined, Maitland advised that policies be set – and once again, kept as simple as possible. “Nirvana policies are not practical,” she said. If policies are too strict or too lax, she pointed out, “everybody ignores the policy and finds underground ways of keeping their data anyway.” A 30-day deletion policy, moreover, “flies in the face of 10 years of best practices in records management,” and can still expose a company to risk when it needs some data to defend itself. But keeping data forever quickly overwhelms today’s search and indexing tools.
While policy-setting is still an area of unavoidable complexity, Maitland also emphasized that users won’t necessarily need all the features in every archiving and e-discovery product. WORM, for example, “is overkill for most things.” Instead, if a company really needs WORM for a subset of data, she advocated a tiered strategy in which only the data that really needs WORM protection is migrated to and stored on a WORM system.
So yes, this type of tiered approach means ongoing management, something Maitland said admins often overlook when planning an archiving strategy. “With archiving today it can’t be just plug it in and forget it,” Maitland said. “Email archiving is a strategic project, not just a quick fix to manage performance or service levels – it aims to manage information for the long term.”
EMC blogger Storagezilla posted an interesting Flash animated video this morning about Maui, titled CloudFellas, in a post that has since been whacked. In the original post, ‘zilla alluded to ‘getting too far out in front of the boss’, so maybe that’s what happened (the post has been deleted from Google’s cache as well).
The video showed fun little animations about the spread of data to points around the world, and gave the example of a movie project where dailies from the set have to be sent to production houses for editing, then from the production houses back to the studio for vetting, and eventually out to movie theaters for distribution. Connecting multinational islands of data seemed to be a theme, as was scalability to petabytes, even exabytes.
This is important because EMC has yet to formally tell us just what Maui actually does. When Hulk/Maui were first discussed during EMC’s Innovation Day last fall, it was assumed that Maui was the file system for Hulk’s hardware. But it turns out Hulk is shipping with Ibrix as its front-end file system, and according to rumors that were going around about Maui at EMC World in May, Maui is instead a layer of software that sits above local storage pools, one that could serve as a global data repository for multinational companies by tying multiple data centers together.
This even jibes with the codename – Maui is an island in Hawaii as well as the name of the demigod who, in Hawaiian legend, fished the islands up from the sea. Raising islands (of storage), joining them together in a chain…
Then there’s ‘zilla’s comment this morning in his original post: “the internal cloud currently stretches from the east coast of America right into China.” He also mentioned that the business plan is executing to schedule, which would mean Hulk and Maui will both be formally introduced in the third quarter.
Rackable Systems continues to adjust its business in the wake of its announcement last week that it plans to divest the clustered NAS business it acquired with Terrascale two years ago.
According to a Rackable press release, founder and current chief technology officer Giovanni Coglitore will assume all engineering and product development functions, becoming senior vice president of engineering and CTO. Rackable Systems’ senior vice president and chief products officer, Tony Gaughan, will move to a new position within the company as senior vice president of business development and strategy. Dominic Martinelli, Rackable Systems’ current vice president of information technology, has been promoted to chief information officer (CIO) and will continue to lead the IT department.
After its announcement about the clustered NAS divestiture, the company said it plans to focus on partnering for storage products rather than developing them internally. A deal with a new storage partner is reportedly in the works, and analysts speculate it could involve IBM’s XIV system.
After my article the other day about storage pros hoping for a VMware performance boost from pNFS, part of the new NFS 4.1 standard currently being ratified by IETF, I came across a response from Michael Eisler, NetApp’s senior technical director and NFS expert.
On his blog, Eisler writes:
Certainly all hypervisor vendors should have a pNFS client on their roadmap: it would be a neat way to automatically parallelize the I/O (and metadata) of the file systems of legacy guest operating systems that don’t have pNFS (e.g. Windows 2003 guest operating systems use NTFS, which a hypervisor can virtualize today into LUNs or files on a storage server. With pNFS on the hypervisor, the files, directories, block maps, etc. of NTFS would be automatically distributed and striped).
However NIC bonding is a solution to problems that don’t exactly intersect the problems pNFS solves. Going down a pNFS-only route in lieu of NIC bonding would lead to cases where single gigabit Ethernet bandwidth between the hypervisor’s pNFS client and a storage device is still not enough.
By the way, NFSv4.1, which pNFS is a part of, adds the capability to perform trunking at the NFS level. NFSv4.1 adds a session layer. A client establishes a session with an NFSv4.1 server. The client can create multiple TCP connections to the NFSv4.1 server, each potentially going over a different network interface on the client and arriving on a different interface on the NFSv4.1 server. Now different requests sent over the same session identifier can go over different network paths. I suspect NFSv4.1 trunking has the potential to “steal the show” with respect to current spot light on pNFS within the NFSv4.1 protocol. It will work with or without pNFS.
At any rate, NFSv4.1 trunking would be a way to obviate NIC bonding. Perhaps that is what Ms. Pariseau was alluding to.
Er…not exactly, but I appreciate the clarification.
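For illustration, the parallelism Eisler describes, a pNFS-style layout spreading a file’s blocks across multiple data servers, can be sketched in a few lines. The stripe size, server names, and round-robin placement below are illustrative assumptions, not the actual NFSv4.1 layout types:

```python
# Illustrative sketch only: a round-robin striped layout, loosely in the
# spirit of pNFS file layouts. Server names and stripe size are made up.

STRIPE_SIZE = 64 * 1024                      # bytes per stripe unit (assumed)
DATA_SERVERS = ["ds1", "ds2", "ds3", "ds4"]  # hypothetical data servers

def server_for_offset(offset: int) -> str:
    """Return which data server would hold the byte at this file offset."""
    stripe_unit = offset // STRIPE_SIZE
    return DATA_SERVERS[stripe_unit % len(DATA_SERVERS)]

def servers_for_range(offset: int, length: int) -> list[str]:
    """List the servers a single read would fan out to in parallel."""
    first = offset // STRIPE_SIZE
    last = (offset + length - 1) // STRIPE_SIZE
    return [DATA_SERVERS[u % len(DATA_SERVERS)] for u in range(first, last + 1)]
```

In this toy model, a 256 KB read starting at offset 0 fans out to all four servers at once, which is the point Eisler makes: the hypervisor’s pNFS client could issue those I/Os in parallel on behalf of a guest OS that knows nothing about pNFS.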
NetApp has a strong working relationship with VMware, despite counting VMware’s parent EMC as its storage archrival. So NetApp CEO Dan Warmenhoven didn’t know what to make of the news when he heard VMware had replaced founder Diane Greene as CEO with former Microsoft exec Paul Maritz last month.
“When I first read the news about the change I was a bit shocked,” Warmenhoven said during NetApp’s earnings conference call Wednesday.
The next surprise for Warmenhoven came when he received a phone call from Maritz hours later. “I really want to thank Paul,” Warmenhoven said. “He placed a call to me before 1 p.m. the day he was announced as the CEO, and I’ve got to imagine that was a very busy day for him. When we did connect, he actually reaffirmed every part of the relationship we had prior, and even took some very visible actions to strengthen the relationship so we were very, very pleased. I think it’s going to be terrific.”
Warmenhoven called Maritz’s reaching out “very pragmatic,” considering there is a large pipeline of customers looking to implement VMware and NetApp storage. And with Microsoft entering the server virtualization market, VMware needs all the friends it can get. “He’s facing some significant competition coming up on the horizon, and he’s not about to jeopardize any close relationships he has,” Warmenhoven said.
Maritz has been doing a lot of reaching out in his early days as VMware CEO. Besides talking to VMware storage and server partners, he had to apologize to customers this week for a VMware bug that locked up their servers.
“You need HOW MUCH for storage?!” That question has been heard by many of us currently submitting budgets for the next calendar year, quickly followed by “Are you SURE you need that much disk? Didn’t we just get disk last year? Where did they all go!? I want your house audited. Now!”
Okay, maybe not the audit part, but for most of us, getting the type of disk we need in the quantity we need it is an uphill battle. Add SSD, deduplication, and longer-term retention to the mix, and things are getting a bit hairy with my budgetary requests. I’m at such a point now with a few of my smaller clients, and when they get that “you’re crazy” look, I bring up the chargeback model.
I think I just heard a collective sigh from the interwebs.
I understand both sides of the chargeback dilemma: the accounting side, which somehow has to keep track of all this without keeping track of all this; and the IT side, which is constantly painted as the cost center only because no one is taking ownership of their parts of the “plumbing.” People (read: departments) will request outrageous resources when they don’t have to directly foot the bill. That part I get, but are they so vehemently against accounting for their infrastructure usage?
In my opinion, chargeback would actually lead to better data management habits — at least in the long term — because if you have to pay for everything out of your own budget, then you’ll be more careful about separating what you need from what you want. How many of our managers and accounting folks have processes in place to account for each department’s use of the “utilities” that make up IT and understand that IT isn’t the root of all expenses?
I had an energetic debate with a co-worker about this very issue. I took the stance that chargeback is the way to go. He offered a more community-oriented accounting method. We went back and forth, point and counterpoint, until concluding that it just depends on what your business environment will support and the level of organization that business has in place.
For instance, if you have a well-organized, project-oriented IT environment and a project portfolio ready for sizing, you can plan a community budget very well and effectively fund additions to your infrastructure through a single IT budget. The reality from my experience (read: SMB clients) is that most companies are not so well-organized, don’t have a project portfolio for the next 12 months, and will not be able to identify budgetary requirements for infrastructure improvements.
In these cases, chargeback (or, at the very least, departmental accounting) is key to being able to answer my opening question with confidence.
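For what it’s worth, the departmental accounting I’m describing doesn’t have to be complicated. Here’s a minimal sketch of a tiered chargeback calculation; the tiers, rates, departments, and usage figures are all invented for illustration:

```python
# Hypothetical tiered chargeback sketch; every rate and figure is made up.

TIER_RATES_PER_GB = {   # monthly cost per GB, by storage tier
    "san": 1.50,
    "nas": 0.75,
    "archive": 0.20,
}

def monthly_charge(usage_gb_by_tier: dict) -> float:
    """Sum one department's monthly bill across the tiers it consumes."""
    return sum(TIER_RATES_PER_GB[tier] * gb
               for tier, gb in usage_gb_by_tier.items())

# Example departmental usage, in GB:
departments = {
    "engineering": {"san": 500, "nas": 2000, "archive": 10000},
    "marketing":   {"nas": 300, "archive": 1500},
}

for dept, usage in departments.items():
    print(f"{dept}: ${monthly_charge(usage):,.2f}")
    # engineering: $4,250.00
    # marketing: $525.00
```

The rates matter less than the habit: once each department sees a line item like this, the “need versus want” conversation starts on its own.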
Traditional SAN storage may be easy to bill for, but what of virtualized storage? Take it a step further: how about Softricity/Microsoft’s SoftGrid? (Softricity is the company Microsoft acquired not too long ago whose technology allows for application-level virtualization as opposed to host virtualization.) How do you quantify and itemize a streamed, virtualized application?
Then there’s the question floating just below the surface of the chargeback debate: How do I, as a department, know you are giving me what I’m being “billed” for? That question opens a giant can of worms in my mind (and there are already creepy crawlies up there, no need to add worms to the mix).
The crux of what I’m getting at is: Are we as technologists — and storage pros specifically — asking for too much or too little when it comes to chargeback? Are there still companies out there that don’t see the light when it comes to chargeback and departmental accounting? Should we as storage pros be leading the way for other areas of IT to follow our example?
Brocade’s earnings report today made it clear it is gaining market share in the Fibre Channel switch competition against Cisco.
No surprise there. Cisco’s earnings last week disclosed its SAN director and switch revenue dropped 14 percent year over year. Brocade today said its switch revenue grew 5 percent and director revenue was up 3 percent over last year.
Why is Brocade picking up steam? Is it because it beat Cisco to the market with 8Gbps directors and switches? Do customers prefer Brocade’s next-gen data center DCX Backbone over Cisco’s Nexus switches? Or are storage vendors – perhaps stung by perceptions that Nexus is not OEM-friendly – pushing Brocade over Cisco?
A case can be made for all three, although it’s probably too early in the DCX and Nexus product cycles to know how much any of them comes into play. Brocade will have had 8-gig products out for close to a year by the time Cisco pushes out its first 8-gig MDS directors late this year. Even Cisco’s data center solutions marketing manager Deepak Munjal told my colleague Beth Pariseau this week that Cisco lost some business from “users who absolutely need 8 Gigabit Fibre Channel” and got it from Brocade.
Brocade execs say 8-gig products are still a small minority of its sales, although its data center infrastructure division GM Ian Whiting said 8-gig is showing up more with customers building new data centers around the DCX Backbone. Brocade’s strong services results – up 43 percent year over year – are also due in large part to customers designing new data centers around the DCX, Whiting said.
Analysts asked Brocade CEO Mike Klayko on today’s earnings call about storage vendors preferring Brocade over Cisco now. He downplayed that notion, saying “I don’t think there’s a concerted effort of OEMs to go one way or other. I think we have best solution in the marketplace.”
But not everybody is sure. In a note to clients earlier this week, analyst Kaushik Roy of Pacific Growth Equities wrote that Brocade is gaining market share and “we believe that is partly because of Cisco’s lack of 8-gig blades and possibly because EMC is favoring Brocade over Cisco at this time.”
Whiting said Brocade is less likely to compete with storage vendors than Cisco. “Our role in the industry is enabling solutions like virtualization, encryption and other services,” he said. “Cisco’s vision is they expect to deliver those solutions themselves. That creates conflict among major players like the IBMs and Hewlett-Packards of the world.”
Brocade said it saw little revenue last quarter from its HBAs, which are still being qualified by storage vendors.
Overall, Brocade’s $365.7 million revenue last quarter increased 12 percent from last year.
Don’t you hate it when you go into iTunes and try to play a purchased song, and it won’t let you until it authenticates…again? Well, take away the authentication step and leave things hanging, multiply it by hundreds of enterprise virtual servers that cost far more than $0.99 each, and you’ve got what some IT managers have been experiencing today.
Thanks to an issue with VMware’s licensing code, VMs fail to power on, fail to leave suspend mode, or fail to migrate with VMotion – meaning those with HA/DR or other applications that require frequent migration of servers are having particular trouble.
According to VMware’s official site,
An issue with ESX/ESXi 3.5 Update 2 causes the product license to expire on August 12, 2008. VMware engineering has isolated the root cause of this issue and will reissue the various upgrade media including the ESX 3.5 Update 2 ISO, ESXi 3.5 Update 2 ISO, ESX 3.5 Update 2 upgrade tar and zip files by noon, PST on August 13. These will be available from the page: http://www.vmware.com/download/vi. Until then, VMware advises against upgrading to ESX/ESXi 3.5 Update 2.
“Noon PST” August 13 – around 36 hours after the bug first struck today. Needless to say, VMware users are not happy.
My fellow blogger here on the Soup, Tory Skyers, hasn’t installed update 2 yet, but the glitch worries him nonetheless. “With all the headaches for VMware over the last couple of months, like the CEO leaving, I’m flabbergasted that they of all people would let something like this happen,” he told me today. For Skyers, this kind of bug was worse than flawed code. “It’s not really a bug per se – the license expiration is supposed to do what it’s doing, just not in these circumstances. This is a QA issue,” he said.
Update: Just received a statement from VMware PR saying that while a fully-tested patch will still have to wait until tomorrow noon (PT), an “express patch” for production servers will be made available today. Let’s hope the express update is also bug-free!
This story isn’t specifically about storage vendors – it’s about airline carrier JetBlue – but I think it’s a great example of the power of the Internet and populist publishing when it comes to getting the news of your experience with a vendor across to other potential customers, and getting that company to respond to you.
Bill Baker was trying to travel home on JetBlue when his flight was cancelled. He said he wasn’t so upset about the cancellation as the manner in which it was handled – no refunds, no sleeping accommodations, no agreements with other carriers to put passengers on other outbound flights.
Unfortunately, what JetBlue didn’t know about this particular passenger is that he works as a technology publicist in Connecticut, and his response was to do, I’m sure, the one thing that would have gotten the undivided attention of his own clients: tell everyone on the Internet about his bad experience.
As CNet’s Charles Cooper put it, “We’re long past the era when companies could cavalierly screw over their customers without risking public humiliation…see, there’s this thing called the Internet…”
Maybe this is true when it comes to consumer companies, but it has not been my experience at all with customers of enterprise products. You’d think that the comparatively large size of both products and price tags would make the enterprise a market even more rife with publicly aired criticisms and calls to action, but you’d be wrong.
I find many enterprise customers fearful of speaking publicly, especially about their vendor. I understand there are big bucks and politics at stake in enterprise capital equipment purchases, but it saddens me when users act like they work for the company that sells them storage products rather than the other way around. Especially when users trade daily, arduous efforts to manage unmanageable products or problems for keeping the political peace. It would be great for the industry, I think, if more storage users started blogs like Baker’s.
Rackspace Holdings became the first company to complete an IPO since March when it priced its shares at $12.50 Thursday (the price fell to $10.01 Friday on the first day of trading).
The hosting company doesn’t specialize in storage, but it is beta testing its CloudFS Web-based storage service, which competes with Amazon S3 and others. Rackspace also launched a dedicated NAS (DNAS) storage service in June, aimed primarily at file-sharing Web sites and companies that deal with rich media files. That’s built on NetApp FAS2000 storage, and Rackspace has an Unmetered Backup service through a partnership with CommVault.
Only a small piece of Rackspace’s revenue ($130.8 million last quarter and $362 million in 2007) comes from storage products, and it could be tough to grow in the storage space with competition heating up.
“Storage is tough [for hosting companies],” Forrester Research analyst Stephanie Balaouras said. “If it’s traditional transaction oriented processing, the storage really needs to be in close physical proximity to the servers for performance reasons. If it’s non-transaction processing — you need to store Web 2.0 type of content like photos, audio, and video — you’re less performance sensitive and you can rely on storage services in the ‘cloud.’ But a lot of startups who need this type of storage service seem to be going with Amazon S3.”
Considering Amazon’s track record for outages, maybe Rackspace is getting into storage at the right time.