IBM. What to make of them these days when it comes to storage?
It’s a question I’ve heard asked a lot this week in my conversations with industry watchers and in my blog reading. Much of it came in the wake of the leak (again) on IBM’s European website of information about an upcoming product announcement.
“This now makes two ‘new platform’ storage announcements from IBM where they simply post a Web page regarding a completely new storage product on their European site and call it a day,” wrote Chuck Hollis in a blog post that got the word out about the leak. “Has IBM decided to focus its marketing efforts elsewhere, and decided not to bring much attention to their … storage business?”
The “announcement” of the XIV clustered block storage array in similar fashion earlier this month prompted similar head-scratching, and, more worrying if I’m IBM, analysts have begun to sit down and dig through the XIV specs IBM released to the market without a single PR person or marketer accompanying them with a message.
“Where’s the beef?” is the phrase I’ve heard used at the end of the analysts’ analyzing. Robin Harris’s StorageMojo blog post is a pretty good representation of the questions I’m also hearing from others in the wider market.
“I hope there is a cohesive strategy behind the XIV product. But so far I’m not able to even guess what it might be,” Harris concluded. “Maybe the decades of warfare between geeks and suits has so totally paralyzed the product marketing function that even the normal IBM facade can’t cover the cracks. It must be something.”
I’m no PR expert, but I have to believe this is what you have PR and marketing for – to at least try to counteract speculation like this. I’ve heard differing opinions on the reasons for the leaks this week – some close to Hollis’s, and others who say IBM has always done this kind of pre-release Web posting (other companies, like Hewlett-Packard, have been known to do it, too). The problem is, there are many more people nowadays scouring the Web for every morsel of information they can dig up. And IBM’s competitors can quickly criticize those products via blogs, putting spin on IBM’s products before IBM does.
Perhaps the most perplexing part is that IBM is just letting rivals take their shots. As far as I can tell, they haven’t responded at all to the criticisms levied by competitors and analysts. And I can’t figure out why that would be. The cat’s out of the bag. The specs are out there. Pretending it hasn’t been announced yet and declining comment isn’t going to change that.
This isn’t the first time I’ve wondered about IBM this year. I’ve also wondered what the deal is with their DS6000 array (I’ve been assured it still exists, but not much more information is forthcoming). I’ve wondered what the deal is with thin provisioning for the DS8000. My news director, Dave Raffo, asked them what the deal is with MAID, dedupe and thin provisioning at this year’s SNW, and got a lot of fairly vague answers.
In fairness, IBM has since acquired Diligent Technologies, finally adding dedupe to their backup hardware product line. But in the dedupe wars (which you can bet are still raging), IBM has been relatively silent.
Instead, yesterday, they sent out a press release saying they’ve developed and tested SSDs at 1 million IOPS. The press release is chock-full of verbiage about how much more technical and expert IBM researchers are and what a wealth of knowledge IBM brings to the SSD table, none of which I doubt.
But the thing is, that’s it. They’ve tested these things as part of Project Quicksilver. IBM labs are the studliest and most advanced in the world. The end, except for an intriguing but vague passage about some future products –
IBM Research has developed breakthrough data center provisioning technology that automatically understands and balances the utilization of diverse storage components in the information infrastructure, including solid-state storage. Additionally, to get the most value from high performance system resources in storage, IBM Research patented key technologies that help maintain required quality-of-service for higher priority applications.
I asked an IBM spokesperson when we’ll see product come out based on what was tested for this press release, and got the following response. “To clarify, there is no timeline/commercialization plan to discuss at this time and we’re not announcing a specific product.” As for the management software (I’m assuming), “we’re not going into specifics at this point.”
To be fair, I’ve heard some criticism recently of other vendors coming out with product pre-announcements months before product availability. But everyone in the industry has by now either launched or announced they will launch solid-state support. IBM, with its server business and experience developing memory technology, ought to be ahead of this pack. Instead, despite the fact that it’s clear they wouldn’t be testing such a thing if there were no potential revenue stream attached, they aren’t saying much else about it.
Maybe the folks running IBM storage think they don’t have to say anything. They’re still an established behemoth with a large, loyal customer base. The phrase “no one ever got fired for buying from IBM” is still thrown around, and IBM officials have argued that customers are willing to wait to get whatever technology is fashionable until they can get it in vetted form from IBM. Given its ginormous customer base, IBM says, its testing and QA processes are much more involved than other vendors’, and hence, it takes longer for new technologies to hit the streets from IBM – but customers are willing to wait for the extra assurance.
Good points all, and storage buyers are a conservative lot. But IBM spent $300 million on a product it hasn’t yet promoted except to cast it as the new crown jewel of Big Blue storage. Meanwhile, people in the marketplace are beginning to tear it apart before anyone sees a PowerPoint slide. People are beginning to wonder if it wasn’t really Moshe Yanai IBM was after, and that they had to buy his startups to get him. People are starting to speculate about what’s going on internally at IBM – about a battle between geeks and suits, or that IBM is ashamed of its storage products and is therefore hiding them. Meanwhile, competitors are having a field day, and IBM is doing nothing to counteract any of it.
What is the deal?
Detroit police are investigating the Aug. 19 death of Cisco marketing executive Benjamin Goldman, 42, who was found fatally shot outside a strip club called the Penthouse on Detroit’s Eight Mile, according to reports. So far, no one is in custody.
According to San Jose Mercury News coverage of Goldman’s memorial service, he worked 16 years at Cisco in customer-facing marketing roles.
This was originally scheduled for May, but after some delays the CERN Large Hadron Collider, which some believe will create a black hole that will swallow the Earth (beginning with France), has been put through its paces on its first test runs. According to the latest reports, launch is now set for Sept. 10. As a great man once said, “hang on to your butts.”
Personally, though, I’m a little more concerned today with reports that an upgrade to the U.S. terrorism database is not going well. But perhaps we’ve gotten to the bottom of why so many random people are on the No-Fly List. Ain’t technology grand?
Symantec Corp. today released the results of its 2008 survey of 1,000 IT managers and decision makers about disaster recovery. Among its findings was a decrease in C-level executive involvement in DR planning compared to the results for the 2007 survey, which Symantec officials said they found alarming.
In the 2007 DR survey, 55 percent of respondents said that their DR committees involved the CIO/CTO/IT director. In 2008, that number dropped to 33 percent worldwide.
“Executive complacency could be attributed to the improvement in DR testing successes,” according to the company’s survey report. Delegation of tasks to lower-level managers once the C-suite sets overall DR goals could also be at play, conceded Symantec director of product marketing for Data Protection Marty Ward. However, the survey results remain a cause for concern at Symantec, Ward said. “It’s more likely that DR is still just not seen as a basic requirement for companies – there also haven’t been as many current events lately that spur people into thinking about disaster recovery.”
As for that last statement, let’s all just take a moment to knock on wood. Meanwhile, Symantec says other results of the survey, like the fact that only 14% of chief security officers are involved in DR, point to complacency rather than delegation.
Other key findings of the study:
- Although one third of organizations have had to execute a disaster recovery plan, just under half say they can get fully operational in a week.
- The number of applications that IT managers believe are business-critical has increased 20 percentage points over the previous year’s data, and only about half of these applications are covered in DR plans.
- Virtualization is driving organizations to reevaluate their DR plans.
- Organizations report that DR testing impacts customers, sales and revenue because of the lack of tools that can address both virtual and physical environments.
On that last one, a recent customer case study we ran on the site attests to the issue. It’s tough enough for companies to classify all data and arrange for tiered recovery while maintaining accurate and realistic RTOs and RPOs. So tough, in fact, that very few companies I’ve come across have even reached the frontier Northeast Utilities came up against – keeping the DR plan current and in working order without the operational bandwidth to complete live tests.
The analogy I’d use for this situation is to another unpleasant task – dieting. If initial DR planning is like losing weight, continued monitoring and updating for the environment is like keeping it off – in other words, the really hard part. According to the 2008 Symantec survey results, only 30 percent of tests meet RTO objectives. Only 31 percent of respondents reported that they could achieve baseline operations within one day if a significant disaster occurred that obliterated their main data center. Only 3 percent believed they could have skeleton operations running within 12 hours.
Not all is doom and gloom, though. “Don’t get me wrong, there has been a 10-fold increase in testing over the last decade, and one of the most encouraging things about the 2008 survey is that it showed that not only are people testing, but more people are testing successfully,” Ward said. Last year, 50 percent of DR tests failed. This year, that number was 30 percent. “But there are still ongoing issues.”
My Google Reader isn’t quite as busy as Robert Scoble’s, but it gets a decent workout each week. Between that, the wires and all the different pitches I get – not to mention the interesting stories I come across that are more general IT than storage-specific – I usually end up with a backlog. Every so often, I’ll clear out that backlog with a link dump. Here’s one for this week:
NetApp’s Simple Steve on how to recover corrupted photos. [Simple Steve: Photo Recovery]
The Storage Anarchist, who already broke the arrival of IBM’s XIV array, keeps pounding away at IBM. [The Storage Anarchist: How much does a free XIV array really cost?]
In case you haven’t heard, former Dell/Equallogic evangelist Marc Farley has signed on with 3PAR. One of his first vids for the 3PAR blog features mad props for the above mentioned Storage Anarchist, with low-tech farm animals in the background. [StorageRap: Props to Anarchist for Blogging Coup]
Back to storage (well, sort of). Another really enjoyable post from Steve Duplessie, with humorous anecdote about his “militaristic” attempts to recycle, how his town has thwarted them, and how it all ties in with green IT. [Steve's IT Rants: Hybrid IT]
Okay, back to storage: Curtis Preston offers his advice for home data protection. [Backup Central: Friends & Family Computer Recommendations]
While EMC’s Anarchist keeps IBM busy, another EMC’er picks on NetApp’s VTL. [The Backup Blog: NetApp's VTL is "Dangerous"]
Amazon adds more cloud storage, this time for its EC2 platform. [TechCrunchIT]
When I got to college, all I got was a POP email account and some spectacularly crappy dining hall food. Kids these days are getting iPhones and iPod Touches. Also, I just said “kids these days”, meaning I’m officially old. Thanks a lot, New York Times. [NYT Technology: Welcome, Freshmen. Have an iPod]
The San Jose Mercury News has an employee’s-eye look at the Agami shutdown. [Promising start-up abruptly shuts down]
Finally, if you only check out one item from this list, make it this one. A new blog called Where is Bob? Tales of an Absentee Manager, is one I recommend bookmarking for anyone who works in IT. It’s kind of like the IT blog equivalent of Office Space, and even involves storage-related hilarity (yes, you read that correctly):
I could see sweat forming on Marek’s forehead. I marveled at his self control, and wondered whether he was practicing zen meditation when he wasn’t hacking into the Pentagon.
“Bob.” He was speaking slowly, enunciating every syllable. “Do you know the meaning of words, back-up and eve-ry-thing?”
“What?” Bob was laughing, he was clearly in good spirits, and Marek’s accent often amused him.
“Backup. Everything.” Marek repeated even slower. I saw a few blood vessels rupture, and his left eye began to twitch violently. I knew that I had to intervene.
“Now look, Bob. What you are asking just doesn’t make sense,” I said. “You can’t have a backup of everything. You need a backup of a particular thing at a particular time.”
“I need a backup of all our servers for all time.” So, he knew that we had servers. I underestimated Bob. But he clearly didn’t understand the passage of time, so perhaps I still had an advantage.
“That’s impossible, Bob. Can’t be done.” It was one of those times when you begin regretting what you said before you even finish saying it.
“Can’t be done!” He didn’t say it like a question, and I knew what was coming. “You are one of those people who say NO all the time. No, we can’t write our own operating system! No, we can’t have a backup of everything! People hate that! You impede progress!”
“Ok, we’ll do it.” Marek gave me a classic crazy-girl-what-are-you-doing look. “Come back next Wednesday.”
When Bob returned to work on Thursday, he forgot about his outlandish backup request, and left us alone. Unfortunately, Bob forgot to mention that we were in violation of a university mandate to have redundant copies of our backups stored in an off-site location. He received the notice about our lack of compliance along with a detailed write-up of the policy. He compressed the forty page document into three incongruous words – backup of everything. So, when we learned about the violation, Marek and I had to postpone all our other projects and commitments, and scramble to make duplicates of critical backups to be sent off site along with other disaster recovery tools and documents. [Where is Bob? Welcome Party for Dave, Part I]
Even smaller private storage companies are keeping their lawyers busy these days.
Pivot3′s legal motion filed with the U.S. Patent and Trademark Office against PivotStor this week is the second time in less than two weeks that small storage companies have become embroiled in lawsuits. Backup vendor Asigra sued its rival ROBObak Aug. 11, proving it’s not only large public companies like Sun, NetApp, Quantum, and Riverbed that are running up legal bills in public spats.
The latest storage lawsuit is over the companies’ names. Pivot3, which started in 2004, says PivotStor, which came around in 2007, is confusing the market by using Pivot in its name and wants it to find another. PivotStor management apparently disagrees — hence the legal motion. The vendors are not direct competitors. Pivot3 sells clustered iSCSI storage systems and PivotStor sells email appliances and tape libraries. However, Pivot3 says people have trouble keeping the two straight.
While the fundamental issues behind the Pivot3 and Asigra suits are different – Asigra is suing ROBObak for libel over claims made in press releases and advertisements – there are two similarities. First, the company getting sued is the lesser known of the two, which means ROBObak and PivotStor could benefit from the free publicity.
The second similarity is both defendants rely on Steve Friedberg of MMI Communications for public relations. While Friedberg is probably spending too much time talking to lawyers these days to agree, a cynic would credit him for pulling off two PR coups.
Everyone and their brother has an email archiving story to tell you these days, or so it seems. But Forrester Research analyst Jo Maitland told Forrester clients in a teleconference titled “Email Archiving Mistakes to Avoid” to keep things simple in their selection of a product and setting of policies.
Users need to begin with a strategy that addresses backup and archiving separately (apparently not everyone in the storage industry read Mr W. Backup’s definitive “Backups are not Archives” article a couple years ago…). Then, they should take into account their requirements for the deployment – whether it will be for end user restore/Exchange optimization, or for legal discovery.
According to Maitland, this is the most crucial step in determining which product will work best in a given environment, and one not everyone clearly understands. This isn’t helped by an overcrowded market with vendors trying to shout over each other with ever-more-complex features, but Maitland boiled it down to a few key things. An archive for e-Discovery should mark data for legal hold and notify an administrator when new content hits an existing search; those seeking an archive for legal discovery should also try to look for one that covers more data types than just email.
For email optimization and end user restore, the product should allow access to emails via a Web browser, automatically copy messages to the archive and delete them from primary storage (too many stub files can still clog up the mail server), and allow simple retrieval back to the inbox.
The two purposes for an archive – eDiscovery and end-user restore – can be mutually exclusive, Maitland said.
Once the requirements are determined, Maitland advised that policies be set – and once again, kept as simple as possible. “Nirvana policies are not practical,” she said. If policies are too strict or too lax, she pointed out, “everybody ignores the policy and finds underground ways of keeping their data anyway.” A 30-day deletion policy, moreover, “flies in the face of 10 years of best practices in records management,” and can still expose a company to risk when it needs some data to defend itself. But keeping data forever quickly overwhelms today’s search and indexing tools.
While policy-setting is still an area of unavoidable complexity, Maitland also emphasized that users won’t necessarily need all the features in every archiving and e-Discovery product. WORM, for example “is overkill for most things.” Instead, if a company really needs WORM for a subset of data, she advocated a tiered strategy where only the data that really needs WORM protection is migrated and stored on a WORM system.
So, yes, with this type of tiered approach, it means ongoing management, something Maitland said admins often overlook when planning an archiving strategy. “With archiving today it can’t be just plug it in and forget it,” Maitland said. “Email archiving is a strategic project, not just a quick fix to manage performance or service levels – it aims to manage information for the long term.”
EMC blogger Storagezilla posted an interesting Flash animated video this morning about Maui, titled CloudFellas, in a post that has since been whacked. In the original post, ‘zilla alluded to ‘getting too far out in front of the boss’, so maybe that’s what happened (the post has been deleted from Google’s cache as well).
The video showed fun little animations about the spread of data to points around the world, and gave the example of a movie project where dailies from the set have to be sent to production houses for editing, then from the production houses back to the studio for vetting, and eventually out to movie theaters for distribution. Connecting multinational islands of data seemed to be a theme, as was scalability to petabytes, even exabytes.
This is important because EMC has yet to formally tell us just what Maui actually does. When Hulk/Maui were first discussed during EMC’s Innovation Day last fall, it was assumed that Maui was the file system for Hulk’s hardware. But it turns out Hulk is shipping with Ibrix as its front-end file system, and according to rumors that were going around about Maui at EMC World in May, Maui is instead a layer of software that sits above local storage pools, which could serve as a global data repository for multinational companies, tying multiple data centers together.
This even jibes with the codename – Maui is an island in Hawaii as well as the name of the Hawaiian demigod who raised the Hawaiian islands from the sea. Raising islands (of storage), joining them together in a chain…
Then there’s ‘zilla’s comment this morning in his original post: “the internal cloud currently stretches from the east coast of America right into China.” He also mentioned that the business plan is executing to schedule, which would mean Hulk and Maui will both be formally introduced in the third quarter.
Rackable Systems continues to adjust its business in the wake of its announcement last week that it plans to divest the clustered NAS business it acquired with Terrascale two years ago.
According to a Rackable press release, founder and current chief technology officer (CTO) Giovanni Coglitore will assume all engineering and product development functions and become senior vice president of engineering and CTO. Rackable Systems’ senior vice president and chief products officer, Tony Gaughan, will assume a new position within the company as senior vice president of business development and strategy. Dominic Martinelli, Rackable Systems’ current vice president of information technology, was promoted to chief information officer (CIO) and will continue to lead the IT department.
After its announcement about the clustered NAS divestiture, the company said it plans to focus on partnering for storage products rather than developing them internally. A deal with a new storage partner is reportedly in the works, and analysts speculate it could be IBM’s XIV system.
After my article the other day about storage pros hoping for a VMware performance boost from pNFS, part of the new NFS 4.1 standard currently being ratified by IETF, I came across a response from Michael Eisler, NetApp’s senior technical director and NFS expert.
On his blog, Eisler writes:
Certainly all hypervisor vendors should have a pNFS client on their roadmap: it would be a neat way to automatically parallelize the I/O (and metadata) of the file systems of legacy guest operating systems that don’t have pNFS (e.g. Windows 2003 guest operating systems use NTFS, which a hypervisor can virtualize today into LUNs or files on a storage server. With pNFS on the hypervisor, the files, directories, block maps, etc. of NTFS would be automatically distributed and striped).
However NIC bonding is a solution to problems that don’t exactly intersect the problems pNFS solves. Going down a pNFS-only route in lieu of NIC bonding would lead to cases where single gigabit Ethernet bandwidth between the hypervisor’s pNFS client and a storage device is still not enough.
By the way, NFSv4.1, which pNFS is a part of, adds the capability to perform trunking at the NFS level. NFSv4.1 adds a session layer. A client establishes a session with an NFSv4.1 server. The client can create multiple TCP connections to the NFSv4.1 server, each potentially going over a different network interface on the client and arriving on a different interface on the NFSv4.1 server. Now different requests sent over the same session identifier can go over different network paths. I suspect NFSv4.1 trunking has the potential to “steal the show” with respect to current spot light on pNFS within the NFSv4.1 protocol. It will work with or without pNFS.
At any rate, NFSv4.1 trunking would be a way to obviate NIC bonding. Perhaps that is what Ms. Pariseau was alluding to.
Er…not exactly, but I appreciate the clarification.
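For readers trying to picture what Eisler means by session trunking, here's a toy sketch – plain Python, not real NFS, with all names invented for illustration. The idea from his description: a client establishes one session with the server, opens several TCP connections over different network interfaces, and requests carrying the same session identifier can travel over whichever connection the client picks (here, naive round-robin).

```python
# Toy illustration of NFSv4.1-style session trunking (not a real NFS client).
# One session, several connections over distinct interfaces; the server
# correlates requests by session ID, not by which connection they arrived on.
import itertools

class Connection:
    """Stands in for one TCP connection over a distinct network interface."""
    def __init__(self, local_if, remote_if):
        self.path = f"{local_if}->{remote_if}"

    def send(self, session_id, request):
        # A real server would match this to session state; here we just
        # echo back which path the request took.
        return {"session": session_id, "request": request, "path": self.path}

class Session:
    """One session bound to several connections (trunking)."""
    def __init__(self, session_id, connections):
        self.session_id = session_id
        self._next_conn = itertools.cycle(connections)  # naive round-robin

    def call(self, request):
        return next(self._next_conn).send(self.session_id, request)

conns = [Connection("eth0", "srv-eth0"), Connection("eth1", "srv-eth1")]
sess = Session("sess-42", conns)
replies = [sess.call(f"READ block {i}") for i in range(4)]
for r in replies:
    print(r["path"], r["session"])
```

The point the toy makes is the one Eisler does: because every request carries the session identifier, bandwidth aggregation happens at the NFS layer rather than below it via NIC bonding, and it works with or without pNFS in the picture.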