Disk drive maker Seagate Technology last night reported its earnings for its fiscal second quarter of 2010 (ending on Jan. 10), and the results echo IBM’s storage sales upswing amid continued reports of an enterprise storage spending rebound.
“Over the course of calendar year 2009 the technology industry has improved faster than the broader economy and the storage sector has outperformed almost every other sector in technology,” CEO Stephen Luczo said on the company’s earnings call. “As a result, the demand for storage continued to accelerate throughout the calendar year.”
Still, Seagate had not been planning for an economic rebound during the quarter, and officials said the stronger-than-forecast demand came as something of a pleasant surprise. The company’s top-line revenue for the quarter was $3 billion, a 33% increase year-over-year and a 14% increase sequentially. Net income was $533 million, and the company generated $753 million in operating cash flow and repaid $246 million in debt. Stifel Nicolaus Equity Research analyst Aaron Rakers wrote in a note to clients this morning that this was the biggest quarterly cash flow generation the company has achieved since his firm began tracking it in 2001.
Wall Street analysts, many of whom were also pleasantly surprised to see Seagate’s earnings come in above their estimates, are now looking for this boon to ripple out to Seagate’s enterprise storage OEM customers, especially EMC, which reports earnings Tuesday.
During its conference call to report fourth-quarter earnings last night, IBM reported its storage sales were up 1% year-over-year, in line with the cautious optimism about storage sales that has been growing over the last few months.
Specific product highlights in storage included Tivoli software and the XIV disk array. “Tivoli storage continued its robust growth as customers manage their rapidly growing storage data,” said CFO Mark Loughridge in prepared remarks on the earnings call. “Data Protection as well as Storage Management grew double-digits with broad-based geography and sector growth.”
Loughridge added later, “we added more than 130 new customers to our XIV platform in the fourth quarter and 400 since the acquisition.” IBM also claimed to have taken share in both the disk and tape markets, even though tape revenue declined 10% for the quarter.
Stifel Nicolaus Equity Research analyst Aaron Rakers pointed out that the 1% year-on-year gain doesn’t tell the whole story. “This implies a sequential increase of [about] 55%, which compares to an average sequential increase of [about] 36% in [the fourth calendar quarter] over the prior 7 years,” Rakers wrote. “IBM estimates that it had gained [one percentage point] of share in the storage market during [the fourth calendar quarter of 2009].” Rakers compared this with his estimate that competitor EMC Corp.’s Information Infrastructure business grew 19% sequentially last quarter. EMC reports earnings results next Tuesday.
EMC launched a new midrange Clariion model today, the High Density Clariion CX4.
The new storage frame can hold up to 390 disks in three rack units and supports 2 TB 7200 RPM and 5400 RPM SATA drives, or a combination of SATA drives and up to 60 solid-state drives (SSD).
The new Clariions are 2U higher and five inches deeper than the standard Clariions, and let customers build larger-capacity configurations in a smaller footprint. For instance, a standard Clariion CX4-960 with support for 1 TB SATA drives takes up six racks to store 945 usable TB. The High Density units can store the same amount in three racks.
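The footprint math here is simple enough to sketch. Below is a minimal illustration in Python using assumed packing densities (15 drives per 3U shelf for a standard enclosure, 30 for a hypothetical shelf twice as dense); real configurations also need controller enclosures, which are ignored here:

```python
import math

def racks_needed(total_drives, drives_per_shelf, shelf_height_u, rack_u=40):
    """Rack count for a given drive count, assuming uniform drive shelves."""
    shelves = math.ceil(total_drives / drives_per_shelf)
    return math.ceil(shelves * shelf_height_u / rack_u)

# Assumed standard packing vs. a hypothetical shelf twice as dense:
print(racks_needed(945, 15, 3))  # standard: 15 drives per 3U shelf
print(racks_needed(945, 30, 3))  # denser: 30 drives per 3U shelf
```

With the denser shelf, the same 945 drives fit in roughly half the racks, which is the effect EMC is claiming for the High Density units.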
The Clariion announcement itself wasn’t anything all that earth-shattering, but it led to an interesting discussion with Enterprise Strategy Group analyst Mark Peters that I think bears summarizing here.
Thoughts on “high end” vs. “midrange”
I spoke Monday with a Pillar Data Systems reseller about Pillar’s new Axiom 600 controller. He said Pillar had recently come up against Symmetrix V-Max in a deal. Pillar lost, but the reseller claimed that was due to EMC’s brand recognition rather than a technical disadvantage for Pillar in this environment.
At first, the comparison between EMC’s high-end disk array and Pillar’s Axiom, which has lived squarely in the midrange, might seem a jarring one. While a fully configured Axiom 600 can scale up to over a petabyte and a half, a single V-Max engine pair can address that much capacity and scale up from there. V-Max also offers more bandwidth (8 Gbps FC versus the Axiom’s 4 Gbps FC) and mainframe connectivity through FICON, which most midrange arrays, including Pillar’s, don’t offer.
But this kind of comparison isn’t nearly as much of a stretch as it might’ve been even four or five years ago, when “midrange” or “low-end” meant “stripped of certain features and functionality.” “It used to be when you wanted a feature that was really complex or clever, you would get a high-end system,” said ESG’s Peters.
These days, the features advertised for Axiom and V-Max have quite a bit of overlap, from thin provisioning to quality of service to tiered storage data migration and solid-state drive support. Since EMC came out with the V-Max Symmetrix model, a successor to the monolithic DMX series, the two have also had broadly similar architectures: a matrix made up of separately scalable performance nodes (“engines” for EMC, “slammers” for Pillar) and capacity nodes (“capacity” for EMC, “bricks” for Pillar).
It also used to be that a high-end disk array could be identified by the amount of cache available to boost performance, as well as by the intelligent algorithms that manage the placement of data on disk. With the advent of Flash for enterprise consumption, vendors like Pillar are able to offer the kind of cache capacities that used to be available only in the highest-end arrays — 192 GB with the Series 2 Axiom 600 controller.
The overlap between midrange and high end is also increasing within EMC’s own product line — the Clariion, which it calls a midrange array, can now achieve capacities well into the Symmetrix range, especially with the high-density model. Both products also offer Flash support, QoS, thin provisioning, drive spin down, etc.
Of course this doesn’t mean that people will begin speaking of the two categories interchangeably, at least not tomorrow or in the near future. But Peters noted that users may emphasize new types of purchasing criteria now that the old lines of sheer capacity and horsepower are beginning to blur. This is where, as the Pillar reseller mentioned, brand recognition and vendor cachet will become more important than ever. “People are buying more than a product,” Peters said. “They’re buying interoperability, a sheer number of service engineers, money going into a research lab, and global support.”
The size of the environment will also continue to play a role, but with more emphasis on risk management depending on the size and profile of the business involved. “Different size companies still have different bases on which they make purchasing decisions, and one of those is risk,” Peters said.
It’s also important to recognize that different companies will use the same technical term to describe features that might still be different under the covers. “All the vendors will now say they have thin provisioning and remote replication,” Peters said. “But you always have to be careful in this game to look out for semantic similarities that may not mean the features are exactly the same.”
So the story as told by the Pillar reseller will probably continue to be retold in various forms.
As an example, Peters turned to a car analogy. “Hyundai and Mercedes might both say they have four-wheel drive,” he said. “And it may be that Hyundai’s really is just as good as the Mercedes, but I don’t necessarily trust them. That’s similar to a storage buyer [today].”
HBA vendors QLogic and Emulex say sales last quarter exceeded their expectations, and Wall Street analysts are predicting good numbers for storage systems vendors. But not every storage vendor finished 2009 strong.
FalconStor Software said it finished 2009 below its forecast due to a poor fourth quarter. FalconStor’s revised guidance calls for annual revenue of $88.5 million to $89 million, compared with previous guidance of $96 million.
FalconStor CEO ReiJane Huai blamed the shortfall on problems with his company’s OEM business. FalconStor says OEM partner H3C generated more than $2 million less in licensing revenue than expected following Hewlett-Packard’s acquisition of H3C parent 3Com. Oracle’s delayed closing on the acquisition of FalconStor OEM partner Sun Microsystems also resulted in less revenue than expected.
Perhaps the biggest problem for FalconStor was revenue from its largest OEM partner, EMC, also came in below expectations. That’s despite analyst predictions that EMC beat its overall revenue forecast for the quarter with strong sales of its Symmetrix V-Max enterprise systems and Data Domain deduplication appliances.
Along with EMC, Wall Street analysts expect good quarters from other storage system vendors large and small. RBC Capital Markets analyst Amit Daryanani predicts NAS vendor Isilon not only beat expectations last quarter but broke even for the first time. 3PAR also exceeded expectations, according to Wedbush Securities analyst Kaushik Roy.
“Industry contacts indicate that all storage systems vendors appear to have met or beat internal expectations,” Roy wrote in a note to clients issued today.
After enjoying the last couple of hours of 2009 with my family, I thought how fitting it would be to end the year with a post!
I’ve been incredibly busy this year and my lack of posts really shows it; one would think I forgot my login or something. In that time, however, there has been no lack of great topics to talk about, and here are a few that lit my candle in 2009:
Consumer computing is fast approaching levels of enterprise computing, making corporate citizens more computer savvy, and making IT management work harder to keep things humming along. Mark my words, you are going to see quite a few work-slash-home networking products come to market in 2010, specifically around data protection and storage, touting “office integration” or “workplace integration.”
The mobile computing and storage space, and the rate at which consumer mobile devices are making inroads into the datacenter, is something that I’m paying close attention to. Specifically, the Android OS and the Nexus One and Droid hardware: these devices are significant to enterprise computing because they take the whole idea of a netbook to another level!!
If you remember the Toshiba Libretto, these new devices are what the Libretto could have been. The phones are fast and let a savvy user essentially replace their office with a hand-held device. And for those with super-security-conscious IT departments, there are products like Good Technology’s “Good for Enterprise,” which lets an administrator remotely wipe Exchange data held in a fully encrypted container on a Droid, so “security” can’t be used as a reason not to support the platform.
Take this a step further, I’m sure you’ve been asked at least once already to store backups of a user’s phone to tape, or better still seen a backup of a user’s phone on their shared drive. If you haven’t yet, you’d better get ready for it!
Virtualization has been rampant, and I predict it will be in my toaster within the year, allowing me to virtually toast multiple slices of bread simultaneously and store the trend info on how many times I’ve burned my Eggos on SSD. While I’m being flippant, we may actually see a hypervisor-capable toaster or fridge or washer, and apparently I’m not the only one who thinks so: in an article on a New York Times blog, Sehat Sutardja is quoted as saying, “[Virtualization] will become pervasive…It will be used in everything from TVs to IP phones to digital picture frames to washing machines.”
If Android is in a washing machine, then I have Linux, and everything that is available to Linux, in that washing machine … just think of the Folding@home scores we could rack up if we linked up the neighborhood washing machines!! Think about all the data that will need to be stored when they start tracking the wash cycles of a particular garment via RFID!!!
On a more serious note, the age of operating systems for small to midsized branch office network attached storage devices, as well as smarter switches and other infrastructure devices, is upon us. Microsoft is not standing still: Windows 7 is small and much faster than its predecessors (why do I feel like that is a paraphrase of the Architect from The Matrix?) and is definitely a viable OS for these devices. Alongside raw Linux, Moblin is making its way onto the stage with Android, among others. And remember, all these things have one thing in common: they need somewhere to store the data they produce.
Speaking of virtualization, the march of development in the virtualization management software space is going to pick up steam in 2010, and there are going to be some casualties. The winner will be the one that allows truly heterogeneous management of my virtual data center from storage on up, and after taking a look at Cisco’s offerings I’m going to be paying very close attention to what they do. I’ve been digging really deep into vSphere, and it’s jam-packed with goodies. Orchestrator is a little gem: properly executed, it can add a good bit of speed and agility to any rapid provisioning initiative you may have. BUT be careful: with a poorly orchestrated (you knew that pun was coming, didn’t you?) workflow, that shiny new NAS with 400 TB of storage will be gone in a day.
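The kind of guardrail that keeps a runaway workflow from draining that NAS can be as simple as a soft quota check before each provisioning step. Here is a hypothetical sketch in Python; this is not Orchestrator’s API, and the function name and the 80% threshold are made up for illustration:

```python
def provision(request_tb, used_tb, capacity_tb=400, quota_fraction=0.8):
    """Reject automated requests that would push usage past a soft quota."""
    soft_quota = capacity_tb * quota_fraction
    if used_tb + request_tb > soft_quota:
        # Kick oversized requests out of the automated path.
        raise RuntimeError(
            f"request for {request_tb} TB would exceed the {soft_quota:.0f} TB "
            "soft quota; route to human approval"
        )
    return used_tb + request_tb  # new pool usage after provisioning

print(provision(10, 300))  # small request goes through automatically
```

Anything over the soft quota gets bounced to human approval instead of silently consuming the pool overnight.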
Enterprise storage has continued to move forward at a blistering pace, with drives breaking the 2 TB mark and some serious performance increases in the form of SSDs, SATA III and Fusion-io putting Flash directly on the bus. Watch prices in this space: the price of SSDs will get lower and lower while performance continues to go up. We will see the proliferation of end-to-end solutions mixing the two, a la Exadata and the Sun 7000 line. Take a look at what Fusion-io is doing in the high-end gaming market! It’s funny, but the consumer machines of today are looking more and more like the specialized workstations and servers of yesterday.
I see some things where we really dropped the ball last year, too. Convergence really isn’t here yet. The drive to make a device the “media hub” and then back all that stuff up is getting there, but hasn’t quite caught on yet. I think once it gets closer it could drive an entire wave of datacenter build-outs to handle it. I can also see telcos getting into the act a little more aggressively, offering storage services at their major POPs to enable some of the consumer products to work properly. This has some unintended but positive side effects for small to medium businesses, because they will have ready access to fast, reliable online storage. Well, at least in theory. I’m still waiting for it to happen!
Cloud storage also hasn’t really shaped up to be the game changer I thought it was going to be. I like the idea of not owning infrastructure and I’m a really big fan of the rapid provisioning/de-provisioning model, but I just don’t see the bandwidth needed for that to work here in the US the way it really should. In Korea and other places that have deployed network infrastructure recently, I see cloud as a viable model, but not here.
With that, folks, I’m back and rarin’ to post!!!
(6:22) i365 makes cloud data storage connection with CA Recovery Management
Also check out our Product of the Year Finalist List, published yesterday.
A website went up today taking registrations for a web conference being put on Jan. 26 by NetApp Inc., VMware Inc., and Cisco Systems Inc., a coalition that looks similar to the VCE alliance announced by VMware, Cisco and EMC Corp. last October. Except in this case, the storage player is EMC’s archrival NetApp.
According to the site, the webcast will cover “what we’re introducing to help you imagine and achieve virtually anything with one elegant solution.” It will feature Tony Bates, senior vice president and general manager of Cisco’s Service Provider Group; Tom Georgens, CEO of NetApp; and Paul Maritz, CEO of VMware.
This looks like another in a line of “stacks” we’ve seen put together by large vendors and their partners in the last six months or so; just yesterday, HP and Microsoft disclosed an alliance that will also focus on infrastructure bundles to support virtual servers.
The Cisco-NetApp-VMware troika also serves as a reminder that none of these alliances are exclusive, and we’ll see vendors making deals with enemies of their partners. When it comes to storage relationships, don’t expect monogamy.
There’s been some commotion this week about the announcement from Google that you can now use Google Docs to upload and manage any file type, with support for uploads up to 250 MB in size, total free storage space of 1 GB, and additional storage for $0.25 per GB per year. With those prices, Google may be offering the cheapest cloud storage capacity available anywhere.
Stephen Foskett, consulting director for enterprise cloud storage player Nirvanix, has tracked this closely and has a pretty good rundown of the discussion about whether or not this is actually Google’s long-rumored GDrive storage service. Personally I think the point is moot — whether it’s called GDrive or not (Google is adamant in public statements that it is not), this is still an online file storage service, complete with a file sync option through a partnership with Memeo. Same dif.
Foskett also points out that for enterprise users to get support, the cost is $3.50 per GB per year, “much more in line with existing offerings” from Amazon, Rackspace and others.
So, what’s the big deal? For one thing, despite the fact that new cloud computing companies seem to be popping up like mushrooms, household brands go a long way in getting people’s attention. Google’s approach may not break new ground for cloud file storage, but it will gain cachet simply because it’s Google.
For consumers, the fact that this is being done through Google Docs and at such a cheap price may take some share away from Amazon’s S3, which requires either API integration or a third-party interface to provision its storage buckets and charges the same prices for capacity regardless of the type of user. S3 also charges for bandwidth while Google Docs doesn’t.
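To see why the price matters, compare annual capacity costs. A quick sketch in Python: the Google figure comes from the announcement above, while the S3 per-GB-per-month rate is an assumed value for its first pricing tier at the time, and S3’s separate bandwidth charges are ignored:

```python
def google_docs_annual(gb):
    # $0.25 per GB per year, per Google's announced pricing
    return 0.25 * gb

def s3_annual(gb, per_gb_month=0.15):
    # per_gb_month is an assumed first-tier S3 rate; S3 also billed
    # separately for bandwidth, which this sketch ignores.
    return per_gb_month * gb * 12

print(google_docs_annual(100))  # annual cost for 100 GB on Google Docs
print(s3_annual(100))           # annual cost for 100 GB on S3 (storage only)
```

At these assumed rates, 100 GB runs $25 a year on Google Docs versus $180 on S3 before bandwidth, which is the gap that could pull consumers away.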
Dell is taking a bottom-up approach to 6 Gbps SAS, beginning with its low-end PowerVault MD1200 and MD1220 direct-attached storage (DAS) systems.
Dell launched the two 6-gig SAS systems today along with three 6-gig SAS controllers for storage and Dell servers.
The new generation of SAS systems has double the bandwidth of the 3 Gbps SAS that has been on the market since SAS began replacing parallel SCSI in 2005.
Last year, Hewlett-Packard rolled out its StorageWorks D2000 external arrays with 6 Gbps SAS.
Dell senior storage product manager Howard Shoobe wouldn’t say when he expects 6 Gbps SAS support for Dell’s EqualLogic iSCSI SAN or the Clariion storage arrays it co-markets with EMC, but many people in the industry believe 6-gig SAS will threaten Fibre Channel as the dominant high-end disk interface.
“With 6-gig, we see SAS becoming more compelling,” Shoobe said. “This is an important step and the foundation for the next generation of storage products.”
The MD1200 is a 2U box that holds 12 3.5-inch drives or a combination of 3.5-inch and 2.5-inch drives. It expands to 96 drives with additional enclosures. Dell positions the MD1200 for applications such as disk backup, email, and streaming media.
The MD1220 is also a 2U system, but holds 24 2.5-inch SAS drives and expands to 192 drives with eight additional enclosures. Dell sees the MD1220 being used for more I/O-intensive applications such as large databases and Web serving.
Both systems also support SAS interface SSDs from Pliant Technology. The PowerVault MD1220 costs $5,637 and the MD1200 is $5,145.
The MD1200 and MD1220 use the new PERC H800 6-gig SAS controller, which supports redundant pathing and I/O load balancing. With redundant pathing, both cables from the controller connect to the DAS system, so if one cable gets disconnected the system will continue to run. Dell is also bringing out PERC H700 and H200 controllers for 11G PowerEdge servers.
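Conceptually, redundant pathing is just a failover loop over the available cables. The following is a toy sketch in Python, illustrative only and not Dell’s controller logic; the `Path` class and names are invented:

```python
class Path:
    """Stand-in for one controller-to-enclosure cable (illustrative only)."""
    def __init__(self, name, up=True):
        self.name = name
        self.up = up

    def send(self, request):
        return f"{request} via {self.name}"

def submit_io(request, paths):
    # With redundant pathing, I/O fails over to a surviving cable.
    for path in paths:
        if path.up:
            return path.send(request)
    raise IOError("all paths to the enclosure are down")

# If cable A is pulled, I/O simply flows over cable B:
print(submit_io("write", [Path("A", up=False), Path("B")]))
```

A real controller also load-balances across healthy paths, but the failover behavior is the part that keeps the system running through a disconnected cable.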
Last week, we wrote about some early storage industry predictions for 2010 from analysts and users, but more industry experts have since checked in with their outlooks for the coming year. Here are a few of the themes from these latest predictions from Symantec Corp., Enterprise Strategy Group (ESG) and Forrester Research:
SMB vs. Enterprise: opinions vary
One of the more interesting common topics explored by these reports is the outlook for technology adoption among different sizes of business. Beyond that similar focus, however, there are some significant differences in how different organizations see the market developing.
According to Symantec’s State of the Data Center survey, midsized enterprises are “leading the way” with adoption of new technologies like cloud storage. According to Matthew Lodge, senior director of product marketing for Symantec, the midsize enterprise data center, defined by Symantec as 2,000 to 9,999 employees, has a “more intense” data center than larger counterparts. “They’re deploying more applications [than larger companies] and expecting major changes in 2010,” Lodge said. He says new applications are driving more stringent availability requirements, while staff remains tight among these organizations.
ESG, meanwhile, has a different view of what makes a midsized business in its preliminary 2010 spending research, according to research director John McKnight. “Generally speaking…[midsized organizations according to ESG’s definition] almost always lag the enterprise in adoption of all new technologies,” McKnight wrote in an email to Storage Soup. “There’s a lot of conventional wisdom that cloud/SaaS adoption will be led by smaller organizations. That may be true in the small business segment (i.e., the “S” in “SMB”) but our data has never shown much of a discrepancy between midmarket and enterprise when it comes to cloud.”
Pent-up demand? Capex vs. Opex
While ESG and Symantec surveyed enterprise storage managers, The Forrester report takes a higher-level view of the overall global IT market, predicting that
The US IT market will grow by 6.6% in 2010 (twice the 3.1% growth in nominal GDP), following a drop of 8.2% in 2009. The global IT market will rise in 2010 by 8.1% in U.S. dollars, and by 5.6% in local currencies. Growth will start slowly in 2010 but pick up steam later in the year, with computer equipment (especially PCs and storage) and software leading the way, and IT consulting services following.
Again, the 2010 storage outlook reports differ on how much spending might rebound in 2010 from 2009 levels, as well as on whether the rebound will be focused on capital expenditures (capex) or operational expenditures (opex). While IDC’s 2010 predictions, reported on last week, foresee “a shift away from capital cost efficiencies to operational cost efficiencies” and the development of “a business-level bias in most companies toward virtualized and/or services-oriented offerings for storage solutions,” from ESG’s perspective that transition happened last year.
While IDC predicts continued spending constraints leading to evaluation of services offerings among storage pros, ESG sees justifications for purchases this year trending toward “more than just cost-cutting,” according to ESG analyst Mark Peters. “We’re seeing the beginning of a shift toward getting back to business in IT,” he said. “There’s a bit of a softening and cautious optimism.”
According to the preliminary ESG research report, of approximately 500 respondents, 52% said IT spending would increase from the previous year, as opposed to 43% in 2009.
Meanwhile, Symantec’s survey respondents still see budget constraints in 2010, particularly when it comes to operational costs like staffing. “Half of all enterprises are somewhat/extremely understaffed,” Symantec’s report reads. “Networking, virtualization and security [are] the most understaffed. [The] biggest issues [are] Budget and finding qualified applicants. 76% have the same or more open job requisitions this year.”
Unified or smart computing — another word for efficiency?
Forrester’s report identifies a set of concepts it calls Smart Computing as leading the next wave of tech innovation this year, described as “leading-edge technologies like service-oriented architecture (SOA), server and storage virtualization, videoconferencing, unified communications platforms, business intelligence and analytics, and new process apps built on digital business application principles.” Like ESG, Forrester forecasts a return to business priorities as opposed to simpler cost-cutting justifications for purchases:
As 2010 progresses and the memories of the 2009 downturn fade, CIOs will start to pay attention to the ways that a new generation of technology can help achieve better business results. In this period of tech innovation and growth around Smart Computing, reference clients and “low total-cost-of-ownership” marketing become less important than media coverage and buzz in the blogosphere about how your solutions have helped companies achieve breakthrough business results. Companies will want technologies that put them ahead of competitors, not technologies that are the same as what competitors are using.
ESG uses the term “unified computing” to describe the server / software / storage IT stacks offered by larger vendors last year. According to McKnight’s email,
one of the only areas in our 2010 survey where the midmarket shows increased interest compared to enterprises (to tie this question to the one above) is in the area of unified computing – 17% of midmarket respondents put unified computing (i.e., integrated server/storage/networking stack) in their top 10 list of 2010 IT priorities compared to 11% of enterprises. Again, there’s a fair bit of conventional wisdom out there that says that these new unified computing platforms will be most attractive to large enterprises and service providers, but this data actually corroborates discussions ESG has had with a number of end users, most of whom have indicated that they view the real niche for unified computing as being smaller organizations or smaller locations (i.e., ROBOs) of big firms.
Disaster Recovery: IT’s New Year’s Resolution
The social networking apps I use to communicate with friends have been full of complaints this month from regular gym-goers about the “New Year’s Resolution” crowd they expect to vanish by February. I’m reminded of that kind of recurring but short-lived New Year’s resolution when, every year at about this time for at least the last three years, the discussion has turned to disaster recovery.
Eighty percent of respondents to Symantec’s data center survey said they were confident in their DR plan, but as with Symantec’s SMB survey last September, that confidence isn’t necessarily supported by facts. The data center survey found that one-third of respondents either do not have a documented DR plan or have not re-evaluated that plan in the last 12 months, and that a significant proportion of plans don’t address cloud computing (41%), remote offices (28%) or virtual servers (23%).
So why this perennial focus on disaster recovery seemingly without significant improvement or change in practice? “The inference we’re making is that it has to do with staffing problems,” Symantec’s Lodge said. “It increases expectations on the vendor side to help with [automating DR].”
“Most organizations don’t have a formal DR plan that’s regularly tested,” ESG’s McKnight adds. “There’s a difference between DR as a business priority as opposed to how it translates into specific formal processes and technologies. Having a plan and testing and improving on that plan are different questions.”