A new survey by IT research firm Computer Economics of some 200 IT executives across a dozen vertical markets found that 46% of respondents plan to reduce headcount this year, while 27% plan to increase it.
The report says healthcare and energy are faring better than other industries, with 60% of healthcare respondents and roughly half of energy and utility organizations reporting staffing increases. Retail, manufacturing and insurance will see the biggest declines, according to the report.
While capital purchases seem to be the most sensitive area for organizations with slashed IT budgets, operational expenditures are a murkier area. The question of staffing and how different organizations are addressing storage efficiency – through technology or operational improvements – seems to come down to organizational philosophy. I’ve talked to users during this economic downturn who say that their IT spending is going up because IT projects are being implemented to automate processes or cut down on spending elsewhere.
The Computer Economics report identifies finance as one industry where this phenomenon is taking place. “Certain sectors, however, are showing positive growth in their 2009/2010 IT operational budgets. These sectors include banking and finance at 4.9%, healthcare providers at 4.7%, professional and technical service firms at 4.0%, and utilities and energy at 1.3%.” These operational budget increases seem to run counter to some vendor marketing in the down economy, which encourages users to trade capital costs for reduced operational costs through automation.
Some vendors, like EMC Corp., have also been predicting stabilization in the economy and IT organizations by the end of this year, but the survey results show “the worst may not be over,” according to Computer Economics’ press release. “Many IT executives expect further budget reductions in the future. About 49% reported that they expect to spend less than the amount allocated in their 2009/2010 IT spending plans compared to only 9% who anticipate being able to increase their IT budgets.”
Though it’s an interesting set of data points within the ongoing discussion of the economy and storage efficiency, I would also point out that with a sample size of 200 respondents, it’s not necessarily a definitive report. I’m hoping more research like this is being done that can be compared and contrasted with these results.
If 3PAR’s results are any indication, storage spending failed to show any signs of a rebound last quarter.
3PAR said Tuesday afternoon that its revenue for last quarter was below its previous forecast, and down from the previous quarter. The storage systems vendor disclosed that it expects to report revenue in the range of $44.2 million to $44.5 million, compared with its previous guidance of $48 million to $50 million. The revised forecast represents roughly an 8% to 9% drop from the previous quarter and a 3% to 4% increase from the same quarter last year. 3PAR also expects to report a net loss for the quarter.
3PAR reported that “sluggishness” in spending grew worse later in the quarter, which suggests budgets aren’t loosening up yet.
“The weakness was more widespread than what we saw last quarter when it was mostly Internet companies. It was more broad-based this quarter,” 3PAR CEO Dave Scott said in a conference call with analysts. “There are clear signs of budget restraints that remain in place.”
Along with tight budgets, Scott blamed the poor results on delays in customer installations of large systems previously ordered (3PAR recognizes revenue when systems are installed rather than when orders are taken). He said there was some “pricing pressure” (discounts) from competitors but said 3PAR was not losing business to rivals. He said 3PAR ran into EMC’s new V-Max system “far less than we expected to” and told an analyst on the call that talk of the New York Stock Exchange replacing 3PAR with Compellent is not true.
“The New York Stock Exchange remains a good customer, and I am unaware of the replacement of any business at New York Stock Exchange by Compellent,” he said.
Scott added, “We are clearly disappointed by our execution this quarter, and we have every intention of improving our performance in the future.”
Not much in the immediate future, though. 3PAR also lowered its guidance for the current quarter, dropping its revenue estimate to $43 million to $47 million – below financial analysts’ $50.8 million consensus estimate.
We’ll get a better idea of whether 3PAR’s results last quarter were typical of the industry over the next few months when larger vendors report their earnings.
Call it a Lutheran Reformation for the 21st century. This time, instead of 95 theses nailed to a church door to challenge the Catholic Church, EMC customer and blogger Martin Glassborow posted one thesis on his blog to challenge EMC on the cost of virtual provisioning, also known as thin provisioning.
Even as storage vendors have been touting the cost savings of thin provisioning, it has cost customers extra to deploy the feature. Wrote Glassborow:
HDS and EMC are both extremely guilty in this regard, both Virtual Provisioning and Dynamic Provisioning cost me extra as an end-user to license. But this is the technology upon which all future block-based storage arrays will be built. If you guys want to improve the TCO and show that you are serious about reducing the complexity to manage your arrays, you will license for free. You will encourage the end-user to break free from the shackles of complexity and you will improve the image of Tier-1 storage in the enterprise.
(HDS might have some quibble with this – another blogger, storage consultant Chris M. Evans, points out that HDS’ Switch It On promotion offers free UVM, Dynamic Provisioning (first 10TB only) and Tiered Storage Manager on existing USP-V deployments. Evans also notes HDS’s promo is for existing as well as new deployments; EMC told me today existing Symm deployments will also be eligible, but there appears to be some confusion about that.)
Glassborow’s wish was granted. In response, EMC blogger Barry Burke, also chief strategy officer for Symmetrix, wrote:
In his post, Martin insists that the current pricing strategies for thin provisioning from both HDS and EMC are a disincentive to the adoption of the otherwise compelling feature that makes enterprise arrays easier and more cost-effective to manage and deploy.
These very conversations have been going on within the walls of EMC, and it has been decided that Virtual Provisioning will in fact be included at no charge and with no capacity limitations for all Symmetrix V-Max and DMX 4 orders beginning this quarter. As a result, all Symmetrix V-Max and DMX 4 customers will be able to leverage the speed and ease of storage provisioning, improved capacity utilization and the inherent benefits of wide striping afforded by Virtual Provisioning, all at no extra charge.
We’ll see if others follow suit.
We shall, and if it happens soon, call me cynical, but I’ll wonder about the timing of this decision on EMC’s part. As Burke notes, this isn’t the first time Glassborow has come knocking with his pricing protest (though I think he deserves credit for his good points and persistence). Should we anticipate hearing from another vendor when it comes to free thin provisioning?
And what about Clariion? EMC added thin provisioning to the CX4 last year, but the free thin provisioning is only available for Symmetrix so far.
NetApp came up short against EMC in its bid for Data Domain, but people in the storage industry expect there will still be an acquisition in NetApp’s future. In the swirl of speculation, NetApp is considered both a prime target and a buyer on the prowl.
As the Data Domain situation shows, NetApp isn’t sitting around waiting to be bought. And there are more companies that NetApp can buy than larger companies that would take on an acquisition the size of NetApp. So it’s likely that NetApp will make its next move as a buyer, and it has several options if it wants to replace Data Domain with another data protection/deduplication supplier. There are even a few companies that can give NetApp what Data Domain couldn’t – namely, backup software or global dedupe.
Here’s a list of candidates we think NetApp might pursue, in order of likelihood, with the advantages and disadvantages of each:
CommVault

Pros: Gives NetApp an instant storage software business in addition to deduplication. Its technology is well respected, with little if any overlap with current NetApp products.
Cons: CommVault’s market share is tiny compared to industry leaders such as Symantec and EMC, especially in large enterprise accounts. More than 10% of CommVault’s revenue comes from OEM deals with Dell and Hitachi Data Systems, which may give CommVault the boot if it goes to their storage rival NetApp.
FalconStor

Pros: Would bring NetApp solid VTL and continuous data protection (CDP) software as well as second-generation deduplication, including global dedupe.
Cons: FalconStor’s dedupe reputation took a hit when its VTL partners EMC and IBM went in other directions for dedupe. FalconStor gets at least 20% of its revenue from EMC VTLs, and a big chunk of its business comes from iSCSI — which probably isn’t of much interest to NetApp because it offers iSCSI on its current storage platform.
Quantum

Pros: Solid dedupe IP and patents acquired from Rocksoft (through ADIC), and a dedupe-based VTL platform that Quantum has improved after a rocky start. EMC has been pushing Quantum on the market as its OEM dedupe partner for nearly a year.
Cons: Comes with baggage – a lot of debt from its own acquisitions, and a lot of tape. For all its talk about dedupe, Quantum still gets most of its revenue from tape. Quantum can also be seen as an EMC castoff in the wake of EMC’s Data Domain buy.
Sepaton

Pros: Global dedupe and a VTL platform that can help NetApp fulfill its goal of becoming more of an enterprise play.
Cons: NetApp might want a more established company. Also, Hewlett-Packard OEMs Sepaton’s technology and could outbid NetApp to keep Sepaton from getting away.
Pros: The startup has built a solid business in the midrange, primarily as a lower-priced option to Data Domain.
Cons: Can be seen as Data Domain-light, and would likely require much investment to become an enterprise play.
Compared to other offerings currently on the market, the system comes in at a higher capacity (the next-biggest all-Flash system is Texas Memory Systems’ 5 TB RAM-SAN 620) and cheaper (TMS’s system costs $220,000). However, there’s one catch: WhipTail uses multi-level cell (MLC) drives instead of single-level cell (SLC) drives.
Most enterprise solid state drive offerings are based on SLC drives, which record one bit per cell. MLC drives, which store two or more bits per cell, are cheaper, more common in consumer devices, and can hold more data. However, their reliability is generally considered lower than that of SLC drives because of the complexity of writing multiple voltage levels to each cell, cheaper raw materials in some cases, and fewer lifetime write cycles supported.
This is where WhipTail’s software IP comes in, according to CEO Edward Rebholz. The software buffers writes to the system using DRAM and splits the writes to NAND capacity into 2-bit blocks to match the size of each cell on the back-end MLC SSDs, which cuts down on wear to the Flash drives and so prolongs their life cycle.
WhipTail also works with its MLC drive suppliers using an “extensive quality control process internally,” Rebholz said. According to the CEO, MLC has also gotten much of its unreliable rep from previous generations of drives made up of 3-bit and 4-bit cells. “Two-bit drives are much better,” Rebholz says. All of this adds up to internal WhipTail estimates that it can make an MLC drive last about seven years with a full overwrite per day. “On average companies overwrite about 25% every day,” Rebholz pointed out.
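Rebholz’s seven-year figure is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming ideal wear leveling and a rated endurance of roughly 3,000 program/erase cycles for 2-bit MLC (a commonly cited ballpark, not a number from the interview):

```python
# Rough NAND endurance estimate under ideal wear leveling.
# Assumption (not from the article): 2-bit MLC rated at ~3,000 P/E cycles.
PE_CYCLES_MLC = 3000

def drive_lifetime_years(pe_cycles, drive_writes_per_day, write_amplification=1.0):
    """Years until the rated P/E cycles are exhausted, assuming wear
    leveling spreads writes evenly across all cells."""
    cycles_per_day = drive_writes_per_day * write_amplification
    return pe_cycles / cycles_per_day / 365

# One full overwrite per day: in the ballpark of the seven-year claim
print(round(drive_lifetime_years(PE_CYCLES_MLC, 1.0), 1))   # ~8.2 years

# The 25% daily overwrite Rebholz says is typical
print(round(drive_lifetime_years(PE_CYCLES_MLC, 0.25), 1))  # ~32.9 years
```

The gap between the naive 8.2-year result and the seven-year claim would be explained by write amplification above 1.0; raising the `write_amplification` parameter shortens the estimate proportionally.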
With a company so new, that lifespan for MLC drives obviously hasn’t been proven out in the market yet. Storage admins have expressed a desire to see larger SSDs, although that usually means capacity and cost per gigabyte of individual drives rather than pools. They might be intrigued by a lower cost per gigabyte (WhipTail also sells 1.5 TB and 3 TB systems for around $46,000 and $75,600 respectively), although capital expenditures on new technologies are among the most difficult projects to get funded in the current economy.
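For context, here is the per-gigabyte math behind those quoted prices (simple division on the figures above, using 1 TB = 1,000 GB):

```python
# Cost per GB for the all-flash systems mentioned, at quoted list prices.
systems = {
    "WhipTail 1.5 TB": (46_000, 1_500),
    "WhipTail 3 TB": (75_600, 3_000),
    "TMS RAM-SAN 620 5 TB": (220_000, 5_000),
}

for name, (price_usd, capacity_gb) in systems.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
# WhipTail 1.5 TB: $30.67/GB
# WhipTail 3 TB: $25.20/GB
# TMS RAM-SAN 620 5 TB: $44.00/GB
```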
Another alternative is the single mode level cell (SMLC) drives that Fusion-io launched this week.
For some applications, SSDs can be a more efficient alternative, in terms of both capex and energy costs, to short-stroking conventional hard disk drives. Users going this route may find the price points WhipTail is offering in a recession more attractive than SLC-based products, but there’s still the matter of trusting the software to provide reliability. I’ll be curious to see which direction users choose.
It turns out Broadcom can take a hint after all.
After Emulex repeatedly spurned its acquisition offers, Broadcom today threw in the towel and said it will walk away. However, the Ethernet-chip maker may not be giving up on buying a storage company altogether.
“Although we were unable to negotiate an expeditious and friendly transaction at a price that makes sense to us given the expectations set by the Emulex board, there are other value-creating alternatives that we will now turn our attention to as we position Broadcom to capitalize on the emerging opportunities in the converged enterprise networking markets,” Broadcom CEO Scott McGregor said in a news release.
Broadcom’s decision to walk came after Emulex’s board today rejected its latest offer to buy the HBA company, saying the bid of $11 per share, or about $912 million, was still too low.
“We unanimously believe Emulex will deliver significantly more value than Broadcom’s revised offer through the company’s rapidly developing converged networking business and solid execution in our host server and embedded storage markets,” Emulex chairman Paul Folino said in a statement.
Folino did add, “we would of course give full consideration to a bona fide offer from any party that reflects the full value of the company.”
Broadcom had been chasing Emulex since December, and made its first formal offer of $9.25 per share or $764 million in April. Emulex management has accused Broadcom of trying to take advantage of its depressed stock price during bad economic times. Emulex shares closed at $9.70 Wednesday, but opened at $8.44 today and traded at $8.90 by mid-afternoon after the deal fell through.
What are Broadcom’s alternatives? Emulex rival QLogic would be even more expensive to buy, so that’s probably not an option. Broadcom can try to build its own Fibre Channel over Ethernet (FCoE) products, or it could pursue LSI’s dormant FC HBA technology.
Emulex also said today it expects to report revenue of approximately $78 million to $79 million for last quarter, at the high end of its forecast of $73 million to $80 million. CEO Jim McCluney added that Emulex recently scored two OEM design wins for its LightPulse HBAs and two design wins for its 10 Gbps Ethernet OneConnect converged network adapters – all with tier 1 vendors. Emulex did not name its partners on those deals.
Carbonite made a big splash in the consumer space this week with the announcement that Sun Microsystems Inc. (soon to become part of Oracle Corp.) will offer a free 30-day trial of its online backup service to Sun customers who upgrade to the latest version of Java or download it for the first time.
Java’s about the most ubiquitous Web interface in the consumer world, so it’s a pretty major coup for Carbonite in its quest to compete with much bigger companies in online backup like Symantec and EMC Mozy. Carbonite’s press release puts Java’s reach at 800 million personal computers.
It’s unclear what proportion of that number is represented by Sun’s direct Java share, since Java is licensed by a number of third-party companies who develop their own custom code. But since Java prompts users for updates automatically, without them seeking out the service, it seems like it should be a pretty effective tool for putting Carbonite in front of users, whether or not they actually take the offer. (One aside here as a PC user myself – the constant “Java update available” reminders are annoying enough. If I have to click through multiple advertisements on my way to installing them, I can see getting very annoyed very quickly…so, of course, it all depends on the consumer response).
“It’s an interesting distribution model for Carbonite,” said Forrester Research analyst Stephanie Balaouras. “It’s not clear how it benefits Sun technically, I’m sure there’s a monetary benefit.”
But this is where things get really interesting – in the course of my conversation with Balaouras I ran across a post on Jonathan Schwartz’s blog discussing a new plan to offer an “app store” in association with Java (if that name sounds familiar, it’s because of Apple’s already-popular App Store service for the iPhone and iPod Touch).
According to Schwartz:
…not all Java runtimes are the same. For most devices, from RIM’s Blackberry to Sony’s Blu-Ray DVD players, original equipment manufacturers (known as “OEM’s”) license core Java technology and brand from Sun, and build their own Java runtime. Although we’re moving to help OEM’s with more pre-built technology, the only runtimes currently that come direct from Sun are those running on Windows PC’s.
And oddly enough, that’s made the Windows Java runtime our most profitable Java platform…a few years ago, we called our friends at one of the world’s largest search companies (you can guess who), to talk about helping them with software distribution – because of Java’s ubiquity, we had a greater capacity than almost anyone to distribute software to the Windows installed base. We signed a contract through which we’d make their toolbar optionally available to our audience via the Java update mechanism. They paid us a much appreciated fee, which increased dramatically when we renegotiated the contract a year later.
The post, which is about two months old and written in anticipation of the JavaOne conference in June, goes on to announce a new business model being pursued by Sun:
The revenues to Sun were also getting big enough for us to think about building a more formal business around Java’s distribution power – to make it available to the entire Java community, not simply one or two search companies on yearly contracts.
And that’s what Project Vector is designed to deliver – Vector is a network service to connect companies of all sizes and types to the roughly one billion Java users all over the world. Vector (which we’ll likely rename the Java Store), has the potential to deliver the world’s largest audience to developers and businesses leveraging Java and JavaFX.
“Everyone,” Schwartz points out, “craves access to consumers.” That’s particularly true in the storage and storage software-as-a-service (SaaS) markets, where consumers are the focus of growth.
Control over Java has been widely considered a primary motivation for Oracle’s planned acquisition of Sun. It’s hard to imagine Sun doing any deal right now that didn’t meet with Oracle’s approval, but stranger things have happened… “This seems opportunistic, not a strategic alignment with one online backup vendor over another,” Balaouras pointed out. “It’s also a consumer, SMB play. I’m not sure how much Oracle will care at the moment.”
But if Carbonite can be distributed to consumers through Java, so could virtually any other online backup and storage service. And Schwartz’s post about Project Vector and the partnerships with search engines show Sun is willing to acquiesce to the highest bidder:
The year following [the initial search engine toolbar deal], the revenue increased dramatically again – when an aspiring search company (again, you can figure out who) outbid our first partner to place their toolbar in front of Java users (this time, limited to the US only). Toolbars, it turns out, are a significant driver of search traffic – and the billions of Java runtimes in the market were a clear means of driving value and opportunity.
It will be interesting to see if Carbonite’s competitors make any counter-moves. It will also be interesting to see how significant a channel to market Sun’s Java becomes for cloud storage vendors – could Sun have a last laugh in storage after all?
Fusion-io has taken a step towards bridging the gap between expensive single-level cell (SLC) and cheaper but slower and less reliable multi-level cell (MLC) NAND Flash.
The startup calls the new solid state drive (SSD) technology single mode level cell (SMLC) and expects to be shipping products in its ioDrive and ioDrive Duo PCI Express product lines this quarter.
Fusion-io says SMLC “combines a cost-effective MLC-based solid-state solution with the endurance and performance of SLC,” but it’s really a third option that falls between SLC and MLC in price and performance.
Fusion-io hasn’t released performance numbers, but CTO David Flynn says the SMLC drives close the gap in write speeds and endurance cycles between MLC and SLC. SMLC drives store two bits per cell and come in capacities of 160 GB and 320 GB just like MLC drives – although the SMLC drives require greater overprovisioning to reach those capacities. Generally, SLC drives write about 20% to 30% faster than MLC drives and have about 10 times the write cycles. For the most part, MLC’s shortcomings have kept it out of enterprise SSD products while SLC’s price still scares off a lot of people.
“(SMLC) is very close to SLC,” Flynn said. “I wouldn’t say it’s exactly SLC, but it’s sufficiently close for most use cases.”
Flynn says SMLC drives will roughly split the difference in cost between its performance SLC ($30 per GB) and capacity MLC ($15 per GB) drives.
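Taking Flynn’s “split the difference” comment literally, here is a quick sketch of what SMLC pricing would look like at the two announced capacities (illustrative arithmetic only; the midpoint is my assumption, not announced pricing):

```python
# Per-GB prices Fusion-io quotes for its existing lines.
SLC_PER_GB = 30.0   # performance (SLC) line
MLC_PER_GB = 15.0   # capacity (MLC) line

# "Roughly split the difference" -> midpoint estimate (assumption, not a quote)
SMLC_PER_GB = (SLC_PER_GB + MLC_PER_GB) / 2  # $22.50/GB

for capacity_gb in (160, 320):
    print(f"{capacity_gb} GB SMLC: ~${SMLC_PER_GB * capacity_gb:,.0f}")
# 160 GB SMLC: ~$3,600
# 320 GB SMLC: ~$7,200
```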
Fusion-io already ships enterprise MLC drives that Hewlett-Packard sells as the HP StorageWorks 320GB IO Accelerator.
“This (SMLC) is subtly different,” Flynn says. “Now we can get endurance and performance characteristics of SLC.”
The difference is in the way the controller manages the NAND Flash, he says. “We don’t need special MLC Flash, that would defeat the purpose,” Flynn said. “The purpose is not to have special requirements.”
Dell also sells Fusion-io cards, and IBM has released test results and is committed to selling Fusion-io SSDs down the road.
None of Fusion-io’s partners have publicly signed on to the SMLC cards yet, but Enterprise Strategy Group analyst Mark Peters says SMLC will likely become a third category for NAND Flash alongside SLC and MLC, at least until NAND is replaced by better technology.
“More people will follow, because they have to,” Peters says. “It’s logical. Every piece of research we’ve done says the No. 1 reason people aren’t adopting solid state is price, and this is a move to get the price down.”
Flynn agrees that Fusion-io won’t be the only vendor with SMLC, even if others call it something different.
“We’re first, but we don’t think we’ll be the last,” he said. “It’s too compelling.”
Because the Snap Server acquisition prompted questions about its ability to protect application data, Overland Storage Inc. today introduced a new Business Continuity Appliance for server and application failover based in part on a partnership with InMage Systems Inc.
Overland acquired the Snap Server product line from Adaptec a year ago, and its move into the internal storage market with the NAS boxes’ direct-attached disks prompted customers to ask more frequently about server, operating system and application availability offerings from Overland.
“We saw deals going away from us,” Overland senior product director Kevin Wise said candidly. “BCA strengthens that part of our data protection story.”
The BCA is available in two form factors: the BCA100, a 1U pizza box, and the BCA200, a 2U chassis. The BCA100 contains enough software licenses to support up to five application servers; the BCA200 comes with support for up to 10 and can expand beyond that with additional license keys.
“The REO BCA is designed and priced for SMBs—starting at less than $24,000,” noted Enterprise Strategy Group (ESG) analyst Lauren Whitehouse in an email to Storage Soup. “The all-in-one appliance enables application-aware failover/failback to deliver near-zero recovery objectives … now SMBs have a cost-efficient alternative to tape-based backup for local operational recovery and remote disaster recovery.”
The boxes can protect application data, but they don’t perform bare-metal restores. That means customers need operating system licenses available at a secondary DR location to completely rebuild a server. Customers can also purchase application agents to support Microsoft Exchange, SQL and Windows file systems. Plans call for adding support for Oracle and Microsoft SharePoint down the road.
Wise was cagey when I asked for a complete list of the partners Overland is working with for this product, saying multiple pieces of software had been integrated into the box. Vice president of product marketing Ravi Pendekanti confirmed InMage software is at least one of the pieces of the software puzzle, contributing continuous data protection (CDP) with replication. Overland reps would not name any other partners.
“The real value of this product is in the integration and support,” Wise said.
Sure, but customers want to know what’s being integrated, I pressed. No dice. Long story short – if you’re evaluating this product, make sure to ask for all the details on what’s behind the curtain.