Over the last few weeks, I’ve written a couple of stories about how the current global economic crisis is being projected to impact the storage market. While users say they don’t anticipate much of a change in their daily life–storage budgets are lean and adoption of products in the storage market is conservative as it is–financial analysts and storage experts see a much bigger impact for storage vendors from the collective effects of declining storage spending growth.
However, there’s one area where, if you’ll pardon the phrase, a potential silver lining has been spotted: cloud computing. One theory is that less available capital or credit for capital outlay makes the economies of scale and zero-hardware options offered by cloud vendors more attractive. But another theory is that in the current economic climate, users become more risk averse than ever, and the cloud remains a new, relatively bleeding-edge phenomenon.
Today there have been some more analyses released about the possibilities for the cloud market: one, a cloud computing spending forecast from IDC; the other, an analysis of the barriers to cloud entry by Gregory Ness for Seeking Alpha.
With or without economic downturns, according to Ness, the nature of today’s network infrastructure is a hurdle to widespread cloud deployment (not to mention the bandwidth of the average data center’s connection to the wider Internet):
Certainly there will always be a business case for elements of cloud, from Google’s pre-enterprise applications to Amazon’s popular services and the powerhouse of CRM, HR and other popular cloud services. Yet there are substantial economic barriers to entry based on the nature of today’s static infrastructure. […] Until the current network evolves into a more dynamic infrastructure, all bets are off on the payoffs of pretty much every major IT initiative on the horizon today, including cost-cutting measures that would be employed in order to shrink operating costs without shrinking the network.
Automation and control has been both a key driver and a barrier for the adoption of new technology as well as an enterprise’s ability to monetize past investments. Increasingly complex networks are requiring escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.
IDC acknowledges that the growth opportunity is “in its infancy” but says the marginal growth will be irresistible to vendors:
Of the $383 billion customers will spend this year within the five major IT segments noted above, $16.2 billion – or a mere 4% – will be consumed as cloud services. By 2012 – based on a conservative forecasting approach… customer spending on IT cloud services will grow almost threefold, to $42 billion, accounting for 9% of customer spending.
On one level, one could argue that – in spite of all the buzz about Cloud Computing and Cloud Services – this model will not even crack 10% of IT spending four years from now. And therefore, one could reasonably ask: why all the fuss?
One reason IT suppliers are sharpening their focus on the “cloud” model is its growth trajectory, which – at 27% CAGR – is over five times the growth rate of the traditional, on-premise IT delivery/consumption model. As noted in our recent user survey, this rapid growth is being driven by the ease and speed with which users can adopt these offerings, as well as the cloud model’s economic benefits (for users and suppliers alike) – which will have even greater resonance in the current economic crisis.
Even more striking than this high growth rate is the contribution cloud offerings’ growth will soon make to the IT market’s overall growth. By 2012 – even at only 9% of user spending – cloud services growth will account for fully 25% of the industry’s year-over-year growth in these five major segments. In 2013, if the same growth trajectories continue, IT cloud services growth will generate about one-third of the industry’s net new growth in these segments.
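IDC’s headline numbers hold up to a quick back-of-envelope check. Using only the figures quoted above ($16.2 billion of $383 billion in 2008, growing to $42 billion by 2012), you can reproduce both the “mere 4%” share and the 27% CAGR:

```python
# Back-of-envelope check of the IDC figures quoted above.
cloud_2008 = 16.2    # $B spent on cloud services in 2008
total_2008 = 383.0   # $B total spend in the five IT segments
cloud_2012 = 42.0    # $B projected cloud spend in 2012
years = 4            # 2008 -> 2012

share_2008 = cloud_2008 / total_2008                  # cloud's share of 2008 spend
cagr = (cloud_2012 / cloud_2008) ** (1 / years) - 1   # compound annual growth rate

print(f"2008 cloud share: {share_2008:.1%}")  # ~4.2%, IDC's "a mere 4%"
print(f"implied CAGR:     {cagr:.1%}")        # ~26.9%, IDC's "27% CAGR"
```

The 9% share in 2012 falls out the same way if you grow the $383 billion base at IDC’s roughly 5% traditional-IT rate over the same four years.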
It will be interesting to see how things actually play out.
FalconStor Software today issued a press release saying it will miss its previous revenue estimates for the year, due to a bad third quarter. FalconStor’s new estimate for the quarter that ended Sept. 30 is a range of between $19 million and $19.5 million, in contrast with Wall Street’s $22.78 million consensus.
FalconStor also lowered its full-year revenue guidance to between $85 million and $87 million, down from its earlier projection of $100 million to $104 million. In a statement, CEO ReiJane Huai blamed the wider economy for the shortfall:
“The difficult economic conditions at the end of the third quarter resulted in many companies freezing or lowering their information technology spending, which caused our revenues for the quarter to fall short of our projections. We remain confident in the capabilities of our products. But given the continuing difficult global economic conditions, we are projecting that revenues for the fourth quarter will also be below our previous expectations.”
“They have a much smaller margin of error,” Enterprise Strategy Group analyst Brian Babineau said of FalconStor. “A couple of deals don’t get done and they miss because we are only talking about $20 million a quarter in revenue.” FalconStor declined comment on whether any particular product line or individual product had been most affected by this miss. The company will report its earnings Oct. 28.
Analysts, investors and perhaps customers will watch closely over the next few weeks as storage companies report earnings, with guidance for this quarter and next year of particular interest. In a recent BusinessWeek interview, NetApp CEO Dan Warmenhoven said his company will also miss its revenue growth projection for the year. Worries in the market about IBM, stemming from its financing business, were assuaged somewhat last week by the company’s pre-announcement of a 20% earnings increase and a reaffirmation of its full-year revenue targets.
“I think most companies would have preannounced [shortfalls] already,” Babineau said. “I do not expect any [more] third quarter pre-announcements, however, I do expect some very conservative guidance and commentary for all IT in regards to the calendar-year fourth quarter.”
IBM spent some time last summer revamping its storage systems, with a new midrange DS5000 system and the first version of the XIV under its brand. Big Blue hasn’t been neglecting software, though. Today IBM launched an entry level version of its SAN Volume Controller (SVC) that uses its same storage virtualization software on a less expensive server. And coming soon: data deduplication with Tivoli Storage Manager (TSM).
IBM System Storage SVC Entry Edition is limited in scale (up to 60 disk drives) and runs on a System x3250 single-socket server instead of the dual-core System x3550 that the standard SVC runs on. The idea is to make it more affordable for smaller companies – the entry edition is priced per disk drive with a starting configuration costing around $35,000 for five drives. The regular SVC is priced per usable capacity, starting at $50,000 for 1 TB. Customers who buy the entry edition can convert to the standard SVC later.
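As a rough comparison of the two pricing models, here’s a sketch using only the list prices quoted above; actual pricing will vary with drive capacity, configuration and discounting:

```python
# Rough comparison of the two SVC pricing models, using the list
# prices quoted above. Real quotes depend on drive capacity and discounts.
ENTRY_START_PRICE = 35_000     # entry edition, five-drive starting config
ENTRY_START_DRIVES = 5
ENTRY_MAX_DRIVES = 60          # entry edition scale ceiling

STANDARD_START_PRICE = 50_000  # standard SVC, first usable TB

per_drive = ENTRY_START_PRICE / ENTRY_START_DRIVES
print(f"entry edition: ~${per_drive:,.0f} per drive at the starting config")
# A shop that outgrows the 60-drive ceiling can convert in place to the
# capacity-priced standard SVC rather than ripping anything out.
```

The per-drive model keeps the entry price below the standard edition’s $50,000 floor for small configurations, which is presumably the point.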
“This is not an SVC Light,” said Chris Saul, marketing manager for SVC. “It does everything our existing SVC does.” That includes the thin provisioning IBM recently added, which Saul said he expects to be attractive to SMBs. IBM also added SVC support for its own XIV and DS5000, its Diligent VTL deduplication gateway, HDS’ Universal Storage Platform and HP’s XP20000/XP24000 arrays, and made SVC compatible with Microsoft Hyper-V. The entry level SVC will be available Nov. 21.
IBM is also planning dedupe in the next version of its TSM backup software, due out around December or January. “TSM will have dedupe with TSM server in the next release,” said Kelly Beavers, IBM’s director of storage software. Beavers said dedupe will be at the TSM server layer. “The next step is data dedupe on the client side, then with FilesX software for remote office,” she said.
There are some potential inferences to be drawn from two moves Symantec Corp. made last week: the acquisition of MessageLabs and the launch of Veritas Cluster Server One. For now, though, clear answers as to whether those inferences are correct are not forthcoming.
MessageLabs is partnered with Fortiva to offer email archiving SaaS, so I wondered if the acquisition might mean that Symantec will get into that kind of offering as well. SearchSecurity.com reported that MessageLabs CEO Adrian Chamberlain will be heading up a new SaaS group at Symantec, though Symantec officials also told SearchSecurity they won’t be SaaS-enabling all products. This remains an open question for now, as a Symantec spokesperson told me that product roadmaps will be decided after the acquisition closes, which might not happen until year end.
Symantec also launched VCS One, with the goal of allowing organizations to keep active farms of virtual servers running at a disaster recovery site, as well as recover tiered applications with dependencies intact. “Right now, this process is dependent on a lot of tribal knowledge in the heads of individuals who know the right order and design scripts to run this kind of recovery,” said Mark Lohmeyer, vice president and general manager of the VCS product group at Symantec.
If any of that sounds familiar, it might be because this past May, VMware and its storage partners launched VMware Site Recovery Manager, allowing VMware’s VirtualCenter to execute commands against storage arrays at primary and secondary sites during recoveries and enable VirtualCenter-generated metadata about virtual machines to be replicated, along with system and application data. In part, SRM is designed to help server virtualization customers automate their disaster recovery checklists, which many of them keep on paper and check off manually.
Meanwhile, Symantec has been among the most outspoken of storage vendors about friction with the server virtualization giant, and at Vision this year took VMware rival Citrix XenServer under its wing and into its product line, claiming the resultant Veritas Virtual Infrastructure product will be a better approach than VMware’s Virtual Machine File System (VMFS) for server virtualization in large environments.
However, Symantec positions VCS One as complementary to SRM, rather than competitive with it. A Symantec spokesperson emailed me the following statement when I asked about it late last week:
VCS One is a complementary solution for VMware environments that can help improve overall availability of the environment in production, for mission-critical apps, by taking an application-centric approach to HA/DR. And, we work closely with VMware to integrate with, and leverage VMware technologies such as Vmotion (for reducing planned downtime) and DRS today, and we’re looking at how we can also integrate with SRM in the future. Finally, our solution is ideal for heterogeneous physical and virtual environments, that includes VMware as well as other platforms (which is the case in virtually every data center).
It’s important to note, though, that VCS One only supports VMware virtual machines at present, which might make the kind of competitive statements made earlier this year a bit awkward at this stage. Lohmeyer says VCS One was under development before Xen came on the scene. “[Support for Xen] will be in our very next release,” he said. Once that happens, I wonder if Symantec’s messaging might change somewhat.
Google’s Message Discovery, based on its 2007 acquisition of email archiving services vendor Postini, has a new option for archiving messages for up to 10 years for a flat fee of $45 per user per year. According to the Official Google Enterprise Blog, “currently there is a lot of confusion in the marketplace about what kind of archiving solutions organizations should pursue, and that confusion is discouraging companies from taking this necessary step to protect their business.” Hence the new deal.
I wondered what “confusion” meant there, exactly, and how lowering the price of a technology would alleviate confusion about it. Google spokesperson Bill Kee elaborated in an email:
“There is often confusion about how much data to keep and how much data to delete. Sometimes, these decisions are made on the basis of legal and business priorities. Often, however, the decision to keep or dispose of email is governed by storage limitations, server performance issues, or cost considerations. By offering a flat $45 model, regardless of how much you store or for how long, our goal is to help customers make retention decisions that align with legal and business priorities, rather than having constraints imposed by technology and cost limitations.”
According to the blog, the company will continue to offer a one-year retention period for the existing fee of $25 per user per year. Both packages also include spam and virus filtering, policy management tools and, of course, search.
Google’s not the first email archiving SaaS vendor to drop prices. Just before Dell bought it last year, MessageOne launched a new rapid-archiving service that can be deployed quickly and costs $1 per user per month — about half the price of Google’s original offering.
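The arithmetic behind that “about half” comparison is worth spelling out, since MessageOne quotes per month and Google per year:

```python
# Annualize the per-seat prices mentioned above so they compare directly.
messageone_per_month = 1.0        # MessageOne rapid-archiving service, $/user/month
google_original_per_year = 25.0   # Google Message Discovery, 1-year retention
google_new_per_year = 45.0        # new 10-year retention option

messageone_per_year = messageone_per_month * 12
ratio = messageone_per_year / google_original_per_year

print(f"MessageOne: ${messageone_per_year:.0f}/user/year "
      f"({ratio:.0%} of Google's original price)")  # $12/year, 48% -- "about half"
```

Note the comparison is against Google’s original $25 one-year tier; the new $45 tier buys ten years of retention, a different product tier entirely.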
Both companies have made their own statements about where those pricing changes come from, but I wonder if there wasn’t also resistance to the companies’ initial pricing from users. A recent Forrester Research report also attributed relatively slow adoption of email archiving SaaS to network latency in accessing off-site archived messages and searching them for e-Discovery.
Brocade apparently became the first storage company to see positive effects of last week’s financial bailout when it received a $1.1 billion loan to help finance its $3 billion acquisition of Foundry Networks.
Brocade executives spent hours extolling the virtues of Foundry and detailing how its expansion into Ethernet networking would help the FC switch vendor at its analyst day last month, yet were grilled during the Q&A session about how they would fund the deal. Analysts worried that funding would be hard to come by with banks faltering.
Investors worried, too. Foundry shares closed at $16.26 Tuesday, well below the $18.50 in cash per share that Brocade would pay in the acquisition. That gap means investors doubted the deal would close.
Brocade intends to raise another $400 million in financing, probably from a high-yield bond or convertible debt offering. Besides securing that $400 million, the other remaining obstacle to the deal is the Foundry shareholder vote scheduled for Oct. 24.
The loan from Banc of America Securities, HSBC Bank, and Morgan Stanley Senior Funding seems to have increased the confidence of investors and analysts. Foundry shares opened and closed at $16.70 today, finishing up on a day when the market was down, though still short of Brocade’s purchase price.
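One way to read those share prices is as a merger-arbitrage spread: the discount of Foundry’s trading price to Brocade’s $18.50-per-share cash offer. A quick calculation on the two closes quoted above shows the spread narrowing after the financing news, though not disappearing:

```python
# Deal spread: how far Foundry trades below Brocade's $18.50/share cash offer.
OFFER = 18.50

def spread(close: float) -> float:
    """Discount to the offer price, as a fraction of the closing price."""
    return (OFFER - close) / close

before = spread(16.26)  # Tuesday's close, before the loan was announced
after = spread(16.70)   # today's close, after the $1.1B loan

print(f"spread before: {before:.1%}")  # ~13.8% -- heavy doubt the deal closes
print(f"spread after:  {after:.1%}")   # ~10.8% -- doubt easing, not gone
```

A double-digit spread on an all-cash deal is still wide, which squares with the remaining hurdles: the $400 million in additional financing and the Oct. 24 shareholder vote.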
“We believe Brocade should be able to close the deal shortly after Oct. 24th,” analyst Kaushik Roy wrote today in a note to clients, after acknowledging “some investors were worried if Brocade could secure the $1.5B financing for the acquisition.”
Acquiring Foundry would enable Brocade to compete on a second front against FC switch rival Cisco at a time when 10-Gig Ethernet, data center Ethernet, and Fibre Channel over Ethernet (FCoE) make Ethernet more valuable in storage and in the data center.
The NetApp/Sun patent infringement lawsuit that started roughly 10 years ago (or so it seems) continues on, first with this latest dispatch from Sun’s general counsel, Mike Dillon, who was gleeful over the results of a Markman hearing in the case of Sun vs. NetApp.
A Markman hearing is designed to settle on agreed-upon definitions of technical terms in a patent-infringement lawsuit. Sun and NetApp each submitted their interpretation of the meaning of various terms under dispute, like “RAID” and “domain name.” These interpretations are called constructions.
According to Dillon,
In dispute were fourteen phrases in seven patents (four asserted by Sun and three by NetApp) that required the court to determine the meaning of terms like “Domain Name”, “Non-volatile Storage Means” and “Root Inode,” among others. Given the complexity, we were impressed when only two weeks later, the judge issued her order.
And, we were very pleased.
In summary, the court agreed with Sun’s interpretation on six of the disputed terms (two of which the court adopted with slight modification) and with NetApp on one.
While this is obviously more in Sun’s favor than NetApp’s, a Markman hearing is a pre-trial procedure. Agreeing with Sun on the terms doesn’t mean there was a decision in Sun’s favor. However, the court did make one ruling dismissing one of NetApp’s outstanding patent claims, US Patent No. 7,200,715, or ‘715 for short, which refers to RAID.
According to Dillon:
…the Court found each of the asserted claims in NetApp’s 7,200,715 patent relating to RAID technology to be “indefinite” – meaning that someone with experience in this area of technology could not understand the limits of the claimed invention. With regard to NetApp’s ‘715 patent, the court agreed with Sun’s position that the claims of the patent are flatly inconsistent with and impossible under the teaching of the patent specification. In effect, unless NetApp appeals and this finding is reversed, the ‘715 patent is effectively invalidated in this case and against others in the future.
While it’s good for Sun, the original defendant, to have this claim dismissed, both companies are seeking injunctions against one another’s products as well as treble damages, and, I would imagine, a contrite apology. A dismissal is good for Sun, but not a finding that NetApp violated its patents. This dismissal amounts to a finding that nobody can really patent something as ubiquitous as RAID.
Dillon also published updated results from the Patent and Trademark Office’s (PTO) reexamination of the patents under dispute in this case. The PTO already found in Sun’s favor on one patent, ‘001, back in June. Sun requested that this patent be taken off the table in the dispute, and the Markman court documents don’t show any reference to it.
Now, according to Dillon, the trial court has agreed to remove ‘001 from consideration. Meanwhile, the PTO has also rejected NetApp’s claims on two more patents, ‘211 and ‘292; the ‘292 patent is the one that refers to WAFL. Uh oh.
…late last week, we were informed that the PTO has rejected all of the asserted claims of this patent relying on at least two separate prior art references out of the many provided by Sun. (The examiner felt that to consider the other references would be “redundant”.)
Some may recall that the ‘292 (“WAFL” technology) patent was what NetApp’s founder, David Hitz, originally highlighted on his blog as being innovative and infringed by ZFS.
While not a decisive victory for Sun (the claims still have to be addressed in court, and again, a dismissal of the patent does not amount to “winning” the countersuit alleging that NetApp infringes Sun’s patents), it’s certainly nothing that could be called a victory for NetApp.
NetApp’s only comment to me about this so far is, “We’re very happy with the way these matters are progressing and we continue to read Mr. Dillon’s blog with great bemusement.”
EMC’s Mozy online backup has added a new Mac edition of MozyPro to its product line. This news follows the introduction of Mozy’s first Home edition Mac client in May.
The MozyPro for Mac product, which is available immediately, adds centralized management features for Mac servers and workstations, including the creation of groups of clients and policies that control their backups. Management in MozyPro is “very fine-tuned,” according to Steve Fairbanks, director of product management for Mozy. “You can adjust backup sets and include or exclude file extension types according to policy.” With the new software, customers can also receive reports on backup job success rates, have alerts on failures sent to an administrator, assign backup quotas and administer roles. MozyPro for Mac will also be manageable through existing MozyPro for Windows management consoles for those who have a mix of Macs and PCs in their environment, Fairbanks said.
Both the Home and Pro editions have specific features to support Mac, including:
- Support for resource forks, aliases and packages
- Spotlight Integration
- Native Cocoa Framework – Graphics are all native
- Mac Help System
- Menu Bar integration
- Native Apple Installer and installation process
- Mac-specific backup sets
One customer who’s been waiting for this rollout for quite some time is Walter Petruska, information security officer for the University of San Francisco. The University has MozyHome for Mac rolled out to some individual faculty and staff members, and the central IT department has been beta testing MozyPro for Mac for months. The plan is to roll MozyPro for Mac out to workstations used by the University’s distributed IT staff “so they can get a feel for Mozy from the client side,” Petruska said. However, the full rollout to all University servers and workstations will wait until there’s a MozyEnterprise edition for Mac.
EMC was coy when it came to whether there will be a MozyEnterprise for Mac, saying that MozyPro will meet most customers’ needs. But the MozyEnterprise edition that’s out now for PCs allows for more advanced management tasks like silent installs, deployment without software keys, and Active Directory and LDAP support for security. Otherwise, Fairbanks said, “there’s very little difference” between MozyPro and MozyEnterprise.
To Petruska, however, the differences are significant. “We’re waiting to make the leap to a new backup paradigm across the University until things align and we can manage all PCs and Macs as well as servers from one console with LDAP and Active Directory integration,” he said.
Right now, workgroups and departments at the university have separate backup plans, and most of those backups remain on-campus in San Francisco, which is prone to earthquakes. Petruska said he’s looking forward to “getting everyone on the same sheet of music” and sending all backups offsite to the cloud. Most of the Mac users on campus today use EMC’s Retrospect software for local backups, but Petruska said MozyEnterprise for Mac would replace them.
Meanwhile, EMC says no edition of Mozy will replace Retrospect in its product line. Rather, according to a Mozy spokesperson, Mozy and Retrospect will be integrated going forward in packages like the one announced with Iomega’s external hard drives in July.
Witness the carnage at VMWorld of a booth giveaway gone bad…(VMblog)
And if I had to guess, I’d say it’s a new disk array. A self-healing, dynamically performance-optimized disk array.
For one thing, the latest fad is for new disk arrays to be promoted in what public relations pros call a “rolling thunder” fashion, where deliberately mysterious statements are made and glimpses are given of an upcoming product until the moment of its launch. See also: Xiotech’s ISE, Oracle’s Database Machine. HDS’s “to be named” is no exception.
More clues on the HDS preview website: “Hitachi + DLB = agile, no touch, no bottlenecks formula.” My guess is that DLB means dynamic load balancing, especially since, well, everything else on the site is about dynamic load balancing.
For example, click on “View video” and some dude walks up to you, saying:
Get ready. It’s coming. What if you could improve your service level agreements for virtually any storage workload? Like you, I want the perfect formula, minimizing I/O disruption and bottlenecks. But what would that formula be? I believe it includes purchasing the minimum number of required disks to meet the performance criteria of all requests. Automatic workload management and exceptional bandwidth. Now I would like to ask, what if I give you the ability to dynamically shift I/O processing to keep workloads running smoothly? Then, what would your ideal storage environment look like?
At this point three choices appear inside the video screen:
- Minimal manual intervention required
- Minimize the risk of degradation when shifting I/O processing
- Self-healing system to overcome failure of key components
Meanwhile, a countdown clock on the site reads 9 days, 16 hours, 53 minutes, 52 seconds. In other words, Oct. 13 — the first day of Storage Networking World.
Around here, the scuttlebutt has been strong that HDS is prepping a new AMS (Adaptable Modular Storage) midrange array. The high-end USP has already gotten a couple of recent refreshes, including a mini-version, as well as a software update; it would make sense for HDS’s midrange arrays to be up for a revamp next.