This was originally scheduled for May, but after some delays the CERN Large Hadron Collider, which some believe will create a black hole that will swallow the Earth (beginning with France), has been put through its paces on its first test runs. According to the latest reports, launch is now set for Sept. 10. As a great man once said, “hang on to your butts.”
Personally, though, I’m a little more concerned today with reports that an upgrade to the U.S. terrorism database is not going well. But perhaps we’ve gotten to the bottom of why so many random people are on the No-Fly List. Ain’t technology grand?
Symantec Corp. today released the results of its 2008 disaster recovery (DR) survey of 1,000 IT managers and decision makers. Among its findings was a decrease in C-level executive involvement in DR planning compared with the 2007 survey, which Symantec officials said they found alarming.
In the 2007 DR survey, 55 percent of respondents said that their DR committees involved the CIO/CTO/IT director. In 2008, that number dropped to 33 percent worldwide.
“Executive complacency could be attributed to the improvement in DR testing successes,” according to the company’s survey report. Delegation of tasks to lower-level managers once the C-suite sets overall DR goals could also be at play, conceded Marty Ward, Symantec’s director of product marketing for data protection. However, the survey results remain a cause for concern at Symantec, Ward said. “It’s more likely that DR is still just not seen as a basic requirement for companies – there also haven’t been as many current events lately that spur people into thinking about disaster recovery.”
As for that last statement, let’s all just take a moment to knock on wood. Meanwhile, Symantec says other results of the survey, like the fact that only 14 percent of chief security officers are involved in DR, point to complacency rather than delegation.
Other key findings of the study:
- Although one-third of organizations have had to execute a disaster recovery plan, just under half say they could be fully operational within a week.
- The number of applications that IT managers consider business-critical has increased 20 percentage points over the previous year, and only about half of those applications are covered in DR plans.
- Virtualization is driving organizations to reevaluate their DR plans.
- Organizations report that DR testing impacts customers, sales and revenue because of the lack of tools that can address both virtual and physical environments.
On that last one, a recent customer case study we ran on the site attests to the issue. It’s tough enough for companies to classify all data and arrange for tiered recovery while maintaining accurate and realistic RTOs and RPOs. So tough, in fact, that very few companies I’ve come across have even reached the frontier Northeast Utilities came up against – keeping the DR plan current and in working order without the operational bandwidth to complete live tests.
The analogy I’d use for this situation is to another unpleasant task – dieting. If initial DR planning is like losing weight, continued monitoring and updating for the environment is like keeping it off – in other words, the really hard part. According to the 2008 Symantec survey results, only 30 percent of tests met recovery time objectives (RTOs). Only 31 percent of respondents reported that they could achieve baseline operations within one day if a significant disaster obliterated their main data center, and only 3 percent believed they could have skeleton operations running within 12 hours.
Not all is doom and gloom, though. “Don’t get me wrong, there has been a tenfold increase in testing over the last decade, and one of the most encouraging things about the 2008 survey is that it showed that not only are people testing, but more people are testing successfully,” Ward said. Last year, 50 percent of DR tests failed. This year, that number was 30 percent. “But there are still ongoing issues.”
My Google Reader isn’t quite as busy as Robert Scoble’s, but it gets a decent workout each week. Between that, the wires and all the different pitches I get – not to mention the interesting stories I come across that are more general IT than storage-specific – I usually end up with a backlog. Every so often, I’ll clear out that backlog with a link dump. Here’s one for this week:
NetApp’s Simple Steve on how to recover corrupted photos. [Simple Steve: Photo Recovery]
The Storage Anarchist, who already broke the arrival of IBM’s XIV array, keeps pounding away at IBM. [The Storage Anarchist: How much does a free XIV array really cost?]
In case you haven’t heard, former Dell/EqualLogic evangelist Marc Farley has signed on with 3PAR. One of his first vids for the 3PAR blog features mad props for the above-mentioned Storage Anarchist, with low-tech farm animals in the background. [StorageRap: Props to Anarchist for Blogging Coup]
Back to storage (well, sort of). Another really enjoyable post from Steve Duplessie, with humorous anecdote about his “militaristic” attempts to recycle, how his town has thwarted them, and how it all ties in with green IT. [Steve’s IT Rants: Hybrid IT]
Okay, back to storage: Curtis Preston offers his advice for home data protection. [Backup Central: Friends & Family Computer Recommendations]
While EMC’s Anarchist keeps IBM busy, another EMC’er picks on NetApp’s VTL. [The Backup Blog: NetApp’s VTL is “Dangerous”]
Amazon adds more cloud storage, this time for its EC2 platform. [TechCrunchIT]
When I got to college, all I got was a POP email account and some spectacularly crappy dining hall food. Kids these days are getting iPhones and iPod Touches. Also, I just said “kids these days”, meaning I’m officially old. Thanks a lot, New York Times. [NYT Technology: Welcome, Freshmen. Have an iPod]
The San Jose Mercury News has an employee’s-eye look at the Agami shutdown. [Promising start-up abruptly shuts down]
Finally, if you only check out one item from this list, make it this one. A new blog, Where is Bob? Tales of an Absentee Manager, is one I recommend bookmarking for anyone who works in IT. It’s kind of like the IT blog equivalent of Office Space, and even involves storage-related hilarity (yes, you read that correctly):
I could see sweat forming on Marek’s forehead. I marveled at his self control, and wondered whether he was practicing zen meditation when he wasn’t hacking into the Pentagon.
“Bob.” He was speaking slowly, enunciating every syllable. “Do you know the meaning of words, back-up and eve-ry-thing?”
“What?” Bob was laughing, he was clearly in good spirits, and Marek’s accent often amused him.
“Backup. Everything.” Marek repeated even slower. I saw a few blood vessels rupture, and his left eye began to twitch violently. I knew that I had to intervene.
“Now look, Bob. What you are asking just doesn’t make sense,” I said. “You can’t have a backup of everything. You need a backup of a particular thing at a particular time.”
“I need a backup of all our servers for all time.” So, he knew that we had servers. I underestimated Bob. But he clearly didn’t understand the passage of time, so perhaps I still had an advantage.
“That’s impossible, Bob. Can’t be done.” It was one of those times when you begin regretting what you said before you even finish saying it.
“Can’t be done!” He didn’t say it like a question, and I knew what was coming. “You are one of those people who say NO all the time. No, we can’t write our own operating system! No, we can’t have a backup of everything! People hate that! You impede progress!”
“Ok, we’ll do it.” Marek gave me a classic crazy-girl-what-are-you-doing look. “Come back next Wednesday.”
When Bob returned to work on Thursday, he forgot about his outlandish backup request, and left us alone. Unfortunately, Bob forgot to mention that we were in violation of a university mandate to have redundant copies of our backups stored in an off-site location. He received the notice about our lack of compliance along with a detailed write-up of the policy. He compressed the forty page document into three incongruous words – backup of everything. So, when we learned about the violation, Marek and I had to postpone all our other projects and commitments, and scramble to make duplicates of critical backups to be sent off site along with other disaster recovery tools and documents. [Where is Bob? Welcome Party for Dave, Part I]
Even smaller private storage companies are keeping their lawyers busy these days.
Pivot3’s legal motion, filed with the U.S. Patent and Trademark Office against PivotStor this week, is the second case in less than two weeks of small storage companies becoming embroiled in lawsuits. Backup vendor Asigra sued its rival ROBObak Aug. 11, proving it’s not only large public companies like Sun, NetApp, Quantum, and Riverbed that are running up legal bills in public spats.
The latest storage lawsuit is over the companies’ names. Pivot3, which started in 2004, says PivotStor, which came around in 2007, is confusing the market by using Pivot in its name, and wants it to find another. PivotStor management apparently disagrees — hence the legal motion. The vendors are not direct competitors. Pivot3 sells clustered iSCSI storage systems and PivotStor sells email appliances and tape libraries. However, Pivot3 says people have trouble keeping the two straight.
While the fundamental issues behind the Pivot3 and Asigra suits are different – Asigra is suing ROBObak for libel over claims made in press releases and advertisements – there are two similarities. First, the company getting sued is the lesser known of the two, which means ROBObak and PivotStor could benefit from the free publicity.
The second similarity is both defendants rely on Steve Friedberg of MMI Communications for public relations. While Friedberg is probably spending too much time talking to lawyers these days to agree, a cynic would credit him for pulling off two PR coups.
Everyone and their brother has an email archiving story to tell you these days, or so it seems. But Forrester Research analyst Jo Maitland told Forrester clients in a teleconference titled “Email Archiving Mistakes to Avoid” to keep things simple in their selection of a product and setting of policies.
Users need to begin with a strategy that addresses backup and archiving separately (apparently not everyone in the storage industry read Mr W. Backup’s definitive “Backups are not Archives” article a couple years ago…). Then, they should take into account their requirements for the deployment – whether it will be for end user restore/Exchange optimization, or for legal discovery.
According to Maitland, this is the most crucial step in determining which product will work best in a given environment, and one not everyone clearly understands. This isn’t helped by an overcrowded market in which vendors try to shout over each other with ever-more-complex features, but Maitland boiled it down to a few key things. An archive for e-Discovery should mark data for legal hold and notify an administrator when new content matches an existing search; those seeking an archive for legal discovery should also look for one that covers more data types than just email.
For email optimization and end user restore, the product should allow access to emails via a Web browser, automatically copy messages to the archive and delete them from primary storage (too many stub files can still clog up the mail server), and allow simple retrieval back to the inbox.
The two purposes for an archive – eDiscovery and end user restore – can be mutually exclusive, Maitland said.
Once the requirements are determined, Maitland advised that policies be set – and once again, kept as simple as possible. “Nirvana policies are not practical,” she said. If policies are too strict or too lax, she pointed out, “everybody ignores the policy and finds underground ways of keeping their data anyway.” A 30-day deletion policy, moreover, “flies in the face of 10 years of best practices in records management,” and can still expose a company to risk when it needs some data to defend itself. But keeping data forever quickly overwhelms today’s search and indexing tools.
While policy-setting is still an area of unavoidable complexity, Maitland also emphasized that users won’t necessarily need all the features in every archiving and e-Discovery product. WORM, for example “is overkill for most things.” Instead, if a company really needs WORM for a subset of data, she advocated a tiered strategy where only the data that really needs WORM protection is migrated and stored on a WORM system.
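As a rough sketch of that tiered idea (the categories and tier names below are invented for illustration, not taken from any product Maitland named), the logic amounts to a classification step in front of the archive, so that only data flagged as needing immutable retention lands on the expensive WORM system:

```python
# Hypothetical record categories a records-management policy might flag as
# requiring immutable (WORM) retention; everything else goes to cheaper storage.
WORM_CATEGORIES = {"legal_hold", "financial_records"}

def route_record(category):
    """Send only the data that truly needs WORM protection to the WORM tier."""
    if category in WORM_CATEGORIES:
        return "worm_tier"
    return "standard_archive"
```

So a record tagged "legal_hold" would be migrated to the WORM tier, while a routine newsletter stays in the standard archive.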
So, yes, this type of tiered approach means ongoing management, something Maitland said admins often overlook when planning an archiving strategy. “With archiving today it can’t be just plug it in and forget it,” Maitland said. “Email archiving is a strategic project, not just a quick fix to manage performance or service levels – it aims to manage information for the long term.”
EMC blogger Storagezilla posted an interesting Flash animated video this morning about Maui, titled CloudFellas, in a post that has since been whacked. In the original post, ‘zilla alluded to ‘getting too far out in front of the boss’, so maybe that’s what happened (the post has been deleted from Google’s cache as well).
The video showed fun little animations about the spread of data to points around the world, and gave the example of a movie project where dailies from the set have to be sent to production houses for editing, then from the production houses back to the studio for vetting, and eventually out to movie theaters for distribution. Connecting multinational islands of data seemed to be a theme, as was scalability to petabytes, even exabytes.
This is important because EMC has yet to formally tell us just what Maui actually does. When Hulk/Maui were first discussed during EMC’s Innovation Day last fall, it was assumed that Maui was the file system for Hulk’s hardware. But it turns out Hulk is shipping with Ibrix as its front-end file system, and according to rumors going around about Maui at EMC World in May, Maui is instead a layer of software that sits above local storage pools, which could serve as a global data repository for multinational companies, tying multiple data centers together.
This even jibes with the codename – Maui is an island in Hawaii as well as the name of the Hawaiian god that raised the Hawaiian islands from the sea. Raising islands (of storage), joining them together in a chain…
Then there’s ‘zilla’s comment this morning in his original post: “the internal cloud currently stretches from the east coast of America right into China.” He also mentioned that the business plan is executing to schedule, which would mean Hulk and Maui will both be formally introduced in the third quarter.
Rackable Systems continues to adjust its business in the wake of its announcement last week that it plans to divest the clustered NAS business it acquired with Terrascale two years ago.
According to a Rackable press release, founder and current chief technology officer Giovanni Coglitore will assume all engineering and product development functions, becoming senior vice president of engineering and CTO. Rackable Systems’ senior vice president and chief products officer, Tony Gaughan, will assume a new position within the company as senior vice president of business development and strategy. Dominic Martinelli, Rackable Systems’ current vice president of information technology, was promoted to chief information officer (CIO) and will continue to lead the IT department.
After its announcement about the clustered NAS divestiture, the company said it plans to focus on partnering for storage products rather than developing them internally. A deal with a new storage partner is reportedly in the works, and analysts speculate it could be IBM’s XIV system.
After my article the other day about storage pros hoping for a VMware performance boost from pNFS, part of the new NFS 4.1 standard currently being ratified by the IETF, I came across a response from Michael Eisler, NetApp’s senior technical director and NFS expert.
On his blog, Eisler writes:
Certainly all hypervisor vendors should have a pNFS client on their roadmap: it would be a neat way to automatically parallelize the I/O (and metadata) of the file systems of legacy guest operating systems that don’t have pNFS (e.g. Windows 2003 guest operating systems use NTFS, which a hypervisor can virtualize today into LUNs or files on a storage server. With pNFS on the hypervisor, the files, directories, block maps, etc. of NTFS would be automatically distributed and striped).
However NIC bonding is a solution to problems that don’t exactly intersect the problems pNFS solves. Going down a pNFS-only route in lieu of NIC bonding would lead to cases where single gigabit Ethernet bandwidth between the hypervisor’s pNFS client and a storage device is still not enough.
By the way, NFSv4.1, which pNFS is a part of, adds the capability to perform trunking at the NFS level. NFSv4.1 adds a session layer. A client establishes a session with an NFSv4.1 server. The client can create multiple TCP connections to the NFSv4.1 server, each potentially going over a different network interface on the client and arriving on a different interface on the NFSv4.1 server. Now different requests sent over the same session identifier can go over different network paths. I suspect NFSv4.1 trunking has the potential to “steal the show” with respect to current spot light on pNFS within the NFSv4.1 protocol. It will work with or without pNFS.
At any rate, NFSv4.1 trunking would be a way to obviate NIC bonding. Perhaps that is what Ms. Pariseau was alluding to.
Er…not exactly, but I appreciate the clarification.
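To make the trunking idea concrete: the client opens several TCP connections under a single NFSv4.1 session (each connection potentially pinned to a different NIC) and spreads requests across them. Here’s a toy round-robin sketch of that dispatching step – purely illustrative, not the actual NFSv4.1 wire protocol:

```python
import itertools

def trunk_requests(request_ids, connection_ids):
    """Round-robin a stream of requests across a session's TCP connections,
    the way an NFSv4.1 client can spread one session over multiple network
    paths. Because all connections share one session identifier, replies can
    be matched to requests no matter which path carried them."""
    return [(req, conn)
            for req, conn in zip(request_ids, itertools.cycle(connection_ids))]
```

With two connections (say, one per gigabit NIC), four requests would alternate between the two paths, roughly doubling available bandwidth without any NIC bonding at the link layer.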
NetApp has a strong working relationship with VMware, despite counting VMware’s parent EMC as its storage archrival. So NetApp CEO Dan Warmenhoven didn’t know what to make of the news when he heard VMware had swapped out founder Diane Greene for former Microsoft exec Paul Maritz as CEO last month.
“When I first read the news about the change I was a bit shocked,” Warmenhoven said during NetApp’s earnings conference call Wednesday.
The next surprise for Warmenhoven came when he received a phone call from Maritz hours later. “I really want to thank Paul,” Warmenhoven said. “He placed a call to me before 1 p.m. the day he was announced as the CEO, and I’ve got to imagine that was a very busy day for him. When we did connect, he actually reaffirmed every part of the relationship we had prior, and even took some very visible actions to strengthen the relationship so we were very, very pleased. I think it’s going to be terrific.”
Warmenhoven called Maritz’s reaching out “very pragmatic,” considering there is a large pipeline of customers looking to implement VMware and NetApp storage. And with Microsoft entering the server virtualization market, VMware needs all the friends it can get. “He’s facing some significant competition coming up on the horizon, and he’s not about to jeopardize any close relationships he has,” Warmenhoven said.
Maritz has been doing a lot of reaching out in his early days as VMware CEO. Besides talking to VMware storage and server partners, he had to apologize to customers this week for a VMware bug that locked up their servers.
“You need HOW MUCH for storage?!” That question has been heard by many of us currently submitting budgets for the next calendar year, quickly followed by “Are you SURE you need that much disk? Didn’t we just get disk last year? Where did they all go!? I want your house audited. Now!”
Okay, maybe not the audit part, but for most of us, getting the type of disk we need in the quantity we need it is an uphill battle. Add SSD, deduplication, and longer-term retention to the mix, and things are getting a bit hairy with my budgetary requests. I’m at such a point now with a few of my smaller clients, and when they get that “you’re crazy” look, I bring up the chargeback model.
I think I just heard a collective sigh from the interwebs.
I understand both sides of the chargeback dilemma: the accounting side, which has to somehow keep track of all this without keeping track of all this; and the IT side, which is constantly being painted as the cost center only because no one is taking ownership of their parts of the “plumbing.” People (read: departments) will request outrageous resources when they don’t have to directly foot the bill. That part I get, but why are they so vehemently against accounting for their infrastructure usage?
In my opinion, chargeback would actually lead to better data management habits — at least in the long term — because if you have to pay for everything out of your own budget, then you’ll be more careful about separating what you need from what you want. How many of our managers and accounting folks have processes in place to account for each department’s use of the “utilities” that make up IT and understand that IT isn’t the root of all expenses?
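As a minimal sketch of what the arithmetic behind a chargeback model can look like (the rates, tier names, and usage figures below are all made up for illustration):

```python
# Hypothetical per-GB monthly rates by storage tier: fast Fibre Channel disk
# costs the department the most, nearline SATA less, tape the least.
RATE_PER_GB = {"tier1_fc": 1.50, "tier2_sata": 0.40, "tier3_tape": 0.05}

def monthly_bill(usage_gb):
    """usage_gb: {tier_name: GB consumed} for one department.
    Returns that department's storage charge for the month."""
    return sum(RATE_PER_GB[tier] * gb for tier, gb in usage_gb.items())

# e.g. a department using 200 GB of FC disk and 1,000 GB of SATA:
# monthly_bill({"tier1_fc": 200, "tier2_sata": 1000}) -> 700.0
```

Once a department sees that 200 GB of tier-one disk costs as much as 750 GB of SATA, the need-versus-want conversation tends to happen on its own.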
I had an energetic debate with a co-worker about this very issue. I took the stance that chargeback is the way to go. He offered a more community-oriented accounting method. We went back and forth, point and counterpoint, until concluding that it just depends on what your business environment will support and the level of organization that business has in place.
For instance, if you have a well-organized, project-oriented IT environment and a project portfolio ready for sizing, you can plan a community budget very well and effectively fund additions to your infrastructure through a single IT budget. The reality from my experience (read: SMB clients) is that most companies are not so well-organized, don’t have a project portfolio for the next 12 months, and will not be able to identify budgetary requirements for infrastructure improvements.
In these cases, chargeback (or, at the very least, departmental accounting) is key to being able to answer my opening question with confidence.
Traditional SAN storage may be easy to bill for, but what of virtualized storage? To take it a step further, how about Softricity/Microsoft’s SoftGrid? (Softricity is the company Microsoft acquired not too long ago whose product allows for application-level virtualization as opposed to host virtualization.) How do you quantify and itemize a streamed, virtualized application?
Then there’s the question floating just below the surface of the chargeback debate: How do I, as a department, know you are giving me what I’m being “billed” for? That question opens a giant can of worms in my mind (and there are already creepy crawlies up there, no need to add worms to the mix).
The crux of what I’m getting at is: Are we as technologists — and storage pros specifically — asking for too much or too little when it comes to chargeback? Are there still companies out there that don’t see the light when it comes to chargeback and departmental accounting? Should we as storage pros be leading the way for other areas of IT to follow our example?