It’s the talk of the IT industry this week, whether you’re in servers, software, or storage–the Wall Street Journal report that IBM is in talks to acquire Sun for — let me write this out — six point five biiiiiiiiiiiilllion dollars. Also known as a 100% premium on current Sun valuation.
I know storage isn’t the major focus of the deal, and it’s more likely driven by Hewlett-Packard, Dell and perhaps server newcomer Cisco putting pressure on IBM’s other divisions. But when it comes to storage, if this merger does take place, all I can see right now is a hot mess. A hot mess with tentacles, even.
It’s not just that both IBM and Sun have less-than-ideal track records when it comes to assimilating recent large storage acquisitions – namely, Sun’s $4.1 billion attempt to swallow StorageTek and last year’s smaller debacle with IBM’s XIV product. Or that, as Taylor Allis, a former Sun evangelist and now a consultant with Capstone Technology, put it, such a merger would represent a culture clash–“East Coast company with its strength in business acumen meets West-Coast Bermuda shorts technology innovation.”
As far as storage goes, one of Sun’s strongest revenue streams is the declining tape market. But what really catches my attention about this possible merger is the tangled web of conflict it would create in the rest of the storage market.
Two major problems stand out immediately, though one of them is the granddaddy of the other. First there’s the matter of high-end disk arrays. IBM has been knocked for a lack of innovation in its DS series of arrays, but has also steadfastly insisted it’s going to keep selling them. There’s already some positioning to work out in the portfolio between the DS series and XIV, and if this merger goes through, you can throw Sun’s longstanding partnership with HDS for high-end disk arrays into the mix. Analytico’s Tom Trainer once suggested to me that IBM may consider OEMing a product from one of its rivals in high-end storage, but would it really do such a thing?
Whether they do or don’t sell HDS, there would most assuredly be a struggle over which direction to go. While we’re at it, what of FalconStor, still a partner to Sun for VTL but dropped by IBM in favor of Diligent? “They would have to have colossal arguments,” said Allis. “Between the mix of cultures and a lot of product overlap, it would be a long, painful road.”
But the really big storage elephant in the room here is NetApp.
IBM OEMs practically NetApp’s entire product line as the N series. NetApp claims Sun steps on its WAFL toes with ZFS, and there’s an ongoing lawsuit over the matter. If IBM acquires Sun, does NetApp sue its own best partner? Does IBM drop NetApp? Does IBM attempt to make Sun and NetApp play nice together? Would acquiring Sun amount to IBM taking a position in the copyright fight, where Sun already claims to be winning?
Are any of those scenarios particularly palatable, especially for customers who are invested in IBM’s N-series precisely because it has the IBM brand associated with it, and therefore ostensibly an assurance that IBM would stand behind that product?
Regardless, those considerations probably will be subsumed in favor of the potential combination of OpenSolaris and ZFS with the “software mainframe” idea VMware has been advancing. It’s no secret IBM is enamored with the cloud, and will need a strong strategy and offering in that space. “The embedded storage capabilities in ZFS like software RAID baked into the operating system are similar to DFSMS in the mainframe world,” Allis pointed out. And at least one partnership deal could be simplified in the merger, as both IBM and Sun OEM LSI storage.
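For readers wondering what Allis means by storage capabilities “baked into the operating system,” ZFS really does fold volume management and software RAID into the filesystem layer itself. A minimal sketch of what that looks like from an OpenSolaris shell (the pool and device names here are illustrative):

```shell
# Create a raidz (software RAID-Z) pool from three disks -- no hardware
# RAID controller or separate volume manager required.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# Carve out a filesystem and enable compression, all from the same toolset.
zfs create tank/data
zfs set compression=on tank/data

# Check pool health and redundancy status.
zpool status tank
```

That single-toolset model is the rough analogue to DFSMS that Allis is drawing.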
Far be it for me to question the value of the software mainframe. That’s not my beat. But if this deal goes through, the storage industry had better buckle up.
It turns out Dell/EqualLogic isn’t the only vendor with an all-iSCSI SAN system that supports solid state drives (SSDs).
StoneFly actually beat Dell to its official announcement of the PS6000 today with its rollout of the Voyager, an active-active midrange IP SAN. Voyager is available with Intel X25-E SSDs, or SAS or SATA drives. It also includes a 10-Gigabit Ethernet option – something that is apparently still on the drawing board at Dell.
It seems like just yesterday – or maybe this morning – that SSDs were considered something for super high performance tier 0 applications. Is anybody using iSCSI for that?
StoneFly product manager Jame Ervin says demand is high for SSD as well as 10-GigE.
“We have customers ready to go with SSD the minute it’s available,” she said. “They’re looking for any performance benefit they can get.”
As for 10-GigE, Ervin says it has the greatest appeal among service providers. “Those type of customers seem to have the most interest so far. They seem to be ahead of the game adopting new technologies,” she said.
IDC research analyst Liz Conner doesn’t expect SSD to be a large selling point for iSCSI before 2010, but with iSCSI use growing and budgets shrinking it’s worth taking a look at now.
“I see it popping up more for Fibre Channel SANs, but it will happen with iSCSI,” Conner said. “Most people will go with Fibre Channel for high performance, but with iSCSI offering more high-end features and the way the economy is now, it’s more of an option.”
Conner says 10-GigE is inevitable for iSCSI. “They can say, ‘we’ve got it now, you won’t have to upgrade later,’” she said of StoneFly’s strategy.
Cisco execs led by CEO John Chambers spent 90 minutes today on a webcast telling groups spread around the world what most of the IT world already knew – it is getting into the server business. But the way Cisco spins it, the company is going beyond servers to a new data center architecture that will include networking, virtualization, a unified fabric, the cloud, and storage.
Nobody spent much time talking about individual products during the webcast, which included executives from partners EMC, VMware, Intel, Microsoft, BMC Software, and Accenture. Cisco has a diagram of the main pieces of its Unified Computing System on its web site, although details such as general availability and pricing are still to come.
We still don’t know the role storage will play in Cisco’s new world either, although EMC CEO Joe Tucci appeared on the webcast to give his blessing and EMC blogger Chuck Hollis today applauded Cisco’s bravery. Cisco also listed NetApp, Emulex, and QLogic among its partners. Considering those vendors along with EMC are also Cisco’s core Fibre Channel over Ethernet (FCoE) allies, it seems as if FCoE is as far as Cisco’s storage plans go for now.
Chambers and other Cisco execs said today’s session was mainly about describing an architecture and partner ecosystem with more product details to follow. And Tucci hinted that EMC will work more closely with Cisco’s unified computing products than it does with Cisco’s MDS Fibre Channel switches. “We will make sure our storage systems are not only qualified with, but really tuned to bring value to this Unified Computing System,” he said.
It’s also important to consider those missing from Cisco’s ecosystem – server vendors Hewlett-Packard, IBM, Dell and Sun, and storage/networking connectivity rival Brocade. For Cisco to succeed with its new unified architecture, it must successfully compete with those vendors who also partner with most of Cisco’s unified computing allies. And in the case of the server vendors, Cisco is going head-to-head with its own Ethernet partners.
So Cisco’s unified computing may play a divisive role with other key technology players. While Cisco customers may be eager to sign on, how will those who prefer open interfaces and standards react?
“Cisco has to get an entire ecosystem participating for its technology to really go into next phase,” StorageIO Group analyst Greg Schulz says. “To go into that market full tilt, they have to step all over IBM, Sun, HP, and potentially Intel white box customers. Are they really serious about wanting to take market share from their key partners? And how will the others respond?”
Dell is about to add solid state drive (SSD) support to its EqualLogic iSCSI SANs in a new PS6000 model. As first reported on ChannelWeb last Friday, the PS6000 will also have four Ethernet ports, one more than EqualLogic PS5000 arrays have.
Dell officials did not return Storage Soup’s requests for comment today, but several industry sources have confirmed the report is accurate and say the PS6000S will support 16 solid-state drives. A PS6000E will also be available with only SATA drives, according to one customer who asked not to be named because the product has not yet been formally released.
The general opinion on solid state is that customers will hold out for higher capacities and other features before they buy. Several EqualLogic customers reached by Storage Soup today said they still found SATA drives adequate for their needs.
However, according to Alan J. Hunt, Manager of Operations for Dickinson Wright PLLC, “It’s just the beginning of the market. In a few years I suspect [SSDs are] all we’re going to have–it’s kind of the beginning of the next big wave.”
Hunt added that a fourth port on EqualLogic’s arrays could be more significant than it might appear. “A fourth port means you would have balance if you have two switches and want redundancy,” he said. “Or you could make it a dedicated management port and still have three ports.”
Missing from the coming product update, if reports are accurate, would be 10-Gigabit Ethernet support, which a Dell spokesperson said last year is on the EqualLogic roadmap for 2009. But as with SSDs, EqualLogic customers and resellers say 10-Gig Ethernet can wait.
“I haven’t seen a substantial interest in 10 Gig,” said Broadleaf Services account executive and EqualLogic VAR Christopher Baer. “There aren’t a lot of applications that require that kind of throughput yet.”
Other customers say 10-GigE would future-proof the array, even if they don’t need the bandwidth quite yet. “Why add another port? Why not 10-GigE and really get this thing going?” said a customer in the education field who requested that his name not be used as he is not authorized to speak with the press. Some pieces of the IT infrastructure in this user’s shop have been upgraded to 10-GigE already, including the network backbone.
“I don’t personally need the bandwidth,” Hunt said. “But it could add the ability to reduce the number of cables and passthrough modules for blades, as well as greatly simplifying VMware deployments.”
While most storage vendors have tempered expectations for early 2009, STEC is predicting a healthy expansion of its solid state drive (SSD) business for enterprise storage arrays.
STEC Thursday predicted its ZeusIOPS product for enterprise storage will command more revenue for the first six months of this year than it did in all of 2008. And STEC beat its projection of $50 million for 2008 by realizing $53 million in ZeusIOPS revenue.
If you’ve been following the storage industry lately, STEC’s optimism should be no surprise. Storage vendors have been lining up to offer ZeusIOPS SSDs in their arrays. EMC, Hitachi Data Systems, Sun, Hewlett-Packard and IBM are on board. Most of those vendors are just getting started. EMC, the first storage vendor to partner with STEC, sold every SSD it had last year according to EMC CEO Joe Tucci.
According to an STEC SEC filing, EMC accounted for 15.2% of STEC’s $227.4 million in total revenue last year, which would put EMC’s piece of STEC revenue at $34.6 million. That’s the bulk of ZeusIOPS sales.
As Beth Pariseau’s story on SearchStorage today reveals, SSDs will remain a niche product in storage arrays until costs come down and management features improve. But sales will obviously get a boost this year from having more vendors pushing them.
STEC didn’t give many other details about its customers during its earnings conference call Thursday and has yet to officially confirm IBM or HP as OEM partners, but CEO Manouch Moshayedi did say ZeusIOPS revenue was split about evenly among Fibre Channel, SAS, and SATA interfaces.
Moshayedi also said STEC is cutting back on its DRAM product to concentrate on SSD for enterprise storage.
“ZeusIOPS is where we’re really putting all our emphasis,” he said.
Moshayedi doesn’t seem to expect a thaw in STEC’s relationship with Seagate now that the drive makers have dropped their lawsuits against each other. When asked if STEC might see Seagate as a partner, Moshayedi gave Seagate a cold shoulder.
“I think I am doing quite well by myself, without needing anyone else,” he said. “Our ZeusIOPS business is going through the roof. So I don’t need anyone else to help me with that. I’ve got the best customers in the world wanting my ZeusIOPS. Our ZeusIOPS is known to be the best drive out there. Everyone is trying to copy it. They have tried for two years, they have failed. And I think we will continue on just trucking with this product for the next few years until something else comes along.”
You know the drill–some stories you may have missed this week:
- (0:27) Pillar adds solid-state disks to Axiom arrays
- (1:17) HP puts solid state in EVA storage arrays
- (2:09) Texas Memory brings out PCIe-based solid state
- (2:31) EMC CEO drops storage product hints at investors’ forum
- (3:00) Isilon expands with transactional and archive systems
- (3:52) Sun flashes new NAND module
- (5:10) Seagate and AMD show off 6-gig SATA drives
Looking for the latest storage news, trends and analysis? http://www.searchstorage.com/news.
Sanrad, maker of the iSCSI storage virtualization gateway V-Switch and virtual server HA storage subsystem V-Stor, is adding a new quality of service feature to both its products.
Sanrad VP of product management Allon Cohen said, “Storage-intensive I/O can be a barrier to virtualization of some applications.” The company’s quality of service feature aims to overcome that by letting users throttle and reserve bandwidth for the most performance-intensive applications. This is similar to features offered by array vendors such as EMC and Pillar Data Systems.
However, unlike those storage arrays, Sanrad’s first QoS release will allow for two levels of service only, according to type of disk. A SATA disk pool will have one quality of service and SAS another (arguably the difference in speed between these drives also provides service differences). In the next release, expected in a month or so, Cohen said, Sanrad will add the ability to prioritize LUN by LUN. Sanrad is also releasing an API so users can set policies and thresholds for dynamic monitoring systems.
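To make the two-tier model concrete, here’s a minimal sketch of the idea – each disk pool carries its own service level, and I/O streams are admitted against that pool’s bandwidth cap. All class and variable names are invented for illustration; this is not Sanrad’s actual code, and the caps are arbitrary.

```python
# Illustrative two-tier storage QoS: each disk pool (SAS vs. SATA) gets
# one service level, expressed here as a bandwidth ceiling. An I/O
# stream is admitted only if its tier still has headroom.

class QosPool:
    def __init__(self, name, cap_mbps):
        self.name = name
        self.cap_mbps = cap_mbps      # bandwidth ceiling for this tier
        self.used_mbps = 0.0          # bandwidth currently reserved

    def admit(self, request_mbps):
        """Admit an I/O stream if the tier still has headroom."""
        if self.used_mbps + request_mbps <= self.cap_mbps:
            self.used_mbps += request_mbps
            return True
        return False                  # caller must queue or throttle

# Two service levels only, keyed by drive type -- the first-release model.
sas = QosPool("sas", cap_mbps=800)    # high tier
sata = QosPool("sata", cap_mbps=300)  # low tier

assert sas.admit(500)        # fits under the SAS cap
assert not sata.admit(400)   # exceeds the SATA cap, gets throttled
```

Per-LUN prioritization, as described for the follow-on release, would amount to hanging one of these budgets off each LUN instead of each drive pool.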
The product is currently in beta with partners and will become available at the end of the quarter. Sanrad is formally announcing it in a couple of weeks. QoS will be a free feature included with Sanrad products and a free upgrade for existing users.
Riverbed held a roundtable discussion about the company (and some other topics) with journalists last night at Boston’s Oceanaire restaurant in Government Center. I sat next to Eric Wolford, Riverbed’s SVP of marketing and business development, and he opened a conversation by asking me “What’s hot?”
“Well,” I told him, “these days I sometimes feel like I’m writing for SearchSSD.com rather than SearchStorage.com.” I didn’t expect Riverbed to be getting into the solid-state disk game, but Wolford said there’s probably a place for SSDs in at least some of its WAN optimization products, too.
Wolford said some large Riverbed customers use Steelhead devices on both sides of the wire for replication. At extremely high bandwidth (OC12 and above), Wolford said SSDs could help keep up when large volumes of data hit the devices’ disks simultaneously.
“With large data center-to-data center replication, they sometimes need so many spindles there’s an opportunity for solid-state storage,” he said.
But he doesn’t see SSDs replacing spinning disk systems, for Riverbed or the industry at large. “It’ll give us a new high end,” he said. And, he added, “an enormous amount of our business is at the T1 level and there’s really no opportunity for it there.”
Wolford also gave me an update on the Atlas primary storage dedupe product Riverbed was originally going to ship this year but recently pushed out until 2010.
“We got critical feedback from alpha customers where they want to deploy [Atlas], but don’t want dependency on the Steelhead appliance.” Wolford said. So Riverbed is working on bundling the Steelhead functionality into the Atlas product itself.
Atlas will sit out of band, he said, “to the side” of the array and perform post-process dedupe. Wolford says customers are hot for primary storage data reduction, but most vendors still can’t deliver it at speeds fast enough for primary storage. “If the device is out of the path of hot data, the performance burden isn’t as extensive,” he said.
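For readers new to the term, post-process dedupe means data is written to primary storage in full first, and a background pass later collapses duplicate chunks into references – which is why sitting out of the hot-data path eases the performance burden. A rough sketch of the mechanism (fixed-size chunking and SHA-256 hashing are illustrative choices here, not Riverbed’s design):

```python
# Illustrative post-process deduplication: a background pass splits
# already-written data into chunks, keeps one copy of each unique chunk,
# and records a "recipe" of hashes that can rebuild the original.
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity; real products vary

def dedupe(data, store):
    """Store each unique chunk of `data` once in `store`; return the
    ordered list of chunk hashes needed to reconstruct it."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only the first copy is kept
        recipe.append(digest)
    return recipe

def rebuild(recipe, store):
    """Reconstruct the original bytes from a recipe of chunk hashes."""
    return b"".join(store[d] for d in recipe)

store = {}
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # heavily repeated content
recipe = dedupe(data, store)
assert rebuild(recipe, store) == data
assert len(store) == 2   # four chunks on disk collapse to two unique ones
```

Running this as a scheduled background job, rather than inline on every write, is the essence of the out-of-band placement Wolford describes.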
A recent report by Andrew Reichman at Forrester Research showed that among 124 surveyed IT decision-makers, incumbent vendors and Fibre Channel still dominate the in-use storage systems supporting VMware deployments. But, according to Reichman and another analyst specializing in VMware, the Burton Group’s Chris Wolf, that doesn’t mean it’s necessarily how things should be.
According to the Forrester Report, users should “pick a vendor that offers thin provisioning, has deduplication on the road map, and has documented best practices in virtual environments.”
Some of these things, Reichman says, are harder to come by than you might think. “Especially clear best practices–it seems like vendors tiptoe around it, saying, ‘we can do whatever you need!’ and customers need more clarity,” he said.
Reichman also said he wanted more storage vendors to offer management console integration with VMware’s vCenter, the way Xiotech does with its Virtual View.
On the management console front, the Burton Group’s Wolf added that vendors who sell backup should be looking to consolidate VMware data protection features into the same software framework as array-based or network-based backup mechanisms.
“That integration is going to be important for backup products supporting a virtual environment,” he said.
Reichman’s survey found that most shops are sticking with an incumbent vendor for server virtualization deployments, and that that vendor is most often EMC. But Reichman also said that the survey is probably capturing the first wave of production deployments of VMware. As those deployments grow and become more complex, and as more storage vendors add new features to their products specifically to support virtual servers, users might be compelled to take a fresh look.
When they do, Reichman’s report urges users to consider Ethernet first rather than Fibre Channel, though he noted Ethernet adoption may be driven primarily by Microsoft’s Hyper-V virtualization software rather than VMware, “which tends to either be protocol-agnostic or Fibre Channel-centric,” he said.
Wolf said there are a couple more virtual server-focused features he’d like to see vendors add as they try to market storage devices for virtual server support. One of them is array-level primary storage compression and dedupe, currently only offered in a handful of places like NetApp’s FAS systems, EMC’s newest Celerra products, and for nearline/archival file storage by startups StorWize and Ocarina. The other is more efficient deployment consolidation for things like patches on multiple virtual machine images.
“So you could deploy a patch one time and any dependent images automatically update as well–that’s the level of intelligence I’d like to see in the arrays,” he said.
According to a notice posted on Facebook’s official blog, a group of disk drives (a RAID group?) on what sounds like a clustered storage system failed en masse over the weekend, causing 10% to 15% of user-uploaded photos on Facebook to become unavailable.
You may have noticed in the past day that some photos aren’t appearing or are displaying a “question mark” graphic when you go to view them. We have experienced some problems with our photo storage that affected between 10 to 15 percent of already uploaded photos. Don’t worry: Your photos are safe, and we are working to make them available again as soon as possible. We’ve already repaired about one-third of affected photos and expect to complete repairs on another third tonight.
Here’s what happened, and what we’re doing to fix the problem: During an otherwise routine software upgrade on Friday night, we ran into some problems with our photo storage and a few of the hard drives where we store photos apparently failed all at once. We’re trying to fully understand what happened, since simultaneous hardware failures like this are rare.
As high-profile storage outages go, this one doesn’t seem to be as severe as it could have been, at least not compared to other Web 2.0 service disasters like ma.gnolia, which wasn’t able to recover users’ bookmarks when its backups failed in January. According to Facebook’s post, users will not lose their pictures while the company diagnoses and repairs the problem, but won’t be able to view them until sometime next week–
We still have all your photos because we store them in a way that maintains multiple copies of the data in case of hardware failures like this. However, even though your photos are safe, we can’t serve photos off the affected storage volumes until they’re repaired. We’re working on them right now, but it will take some time because there’s so much data on them and the repair process largely involves copying huge amounts of data to new drives. This is why some photos aren’t showing up right now.
We’re restoring photos as we repair the hard drives, so some should be working again today and we should be back to normal by early next week. New photo uploads will continue to work properly during the repairs, because we write them to different storage volumes. Thanks for bearing with us while we return things to normal.
Storage Twitterers are skeptical about the cause of the problem. Tim Masters, co-founder of StorageMonkeys.com, wrote “Recovery will take until ‘early next week’ after a ‘hard drive failure’? Wish I had that kind of SLA internally… most of us don’t get the luxury of a week to recover a LUN or a disk shelf…”
Bloggers who aren’t hard-bitten storage guys, meanwhile, had some praise for Facebook’s handling of the issue. “It’s good to know that Facebook maintains backups of all your data for situations like this…” wrote Adam Ostrow at Mashable.
Meanwhile, this isn’t the only tale of consumer-facing storage horror to surface on the Internet today. Gizmodo also reported the saga of Nicole, who was allegedly done wrong on the backup front by Best Buy’s Geek Squad.
“Best Buy charged Nicole $99 to backup her data but then replaced her hard drive without backing up a single byte,” Gizmodo’s Carey writes. “Nicole’s service contract clearly stated that Best Buy would perform the backup before any other service. Now Best Buy is claiming that her old hard drive is their property and that she has no right to the data that they failed to backup or restore.”
To me, Best Buy reserving some kind of property rights on the disk drive sounds like code for “it’s gone to our after-market resale disk drive repository in the sky, and we don’t know where it is.” I don’t think they’re withholding the information deliberately or maliciously (why voluntarily create a PR problem like this one?), but I also don’t think Nicole’s getting her data back.
With more and more digital data protection issues like this one falling into the laps of consumers, we are probably going to eventually–after a long, slow process of learning by painful experience–see an approach to this stuff more like that of enterprise storage and backup experts, none of whom I can imagine uploading a photo to Facebook or bringing a computer hard drive in for service anywhere without making their own backups first.