With Oracle showing a lot less love for open source storage software than Sun did, high performance computing (HPC) shops are nervous about the future of Lustre and HPC hardware vendors are taking steps to pick up the slack.
Last year the Illumos project sprang up as the governing body for OpenSolaris to help spur development of the source code used for ZFS. Now the Lustre community is rallying around the storage file system used in most large HPC implementations as Oracle shows no signs of supporting it.
Since late last year, Xyratex has acquired ClusterStor for its Lustre expertise, startup Whamcloud has been aggressively hiring Lustre developers and partnering with hardware vendors on support deals, and Cray, DataDirect Networks (DDN), and Lawrence Livermore and Oak Ridge National Laboratories have launched OpenSFS.org to develop future releases of Lustre.
DDN last week launched a promotional offer to provide Lustre customers a third year of support for free if they purchase a two-year DDN file system support contract. The support comes through DDN’s alliance with Whamcloud.
“There is a general uneasiness in the industry, and people are looking for somebody to step up,” DDN marketing VP Jeff Denworth said. “There’s been a gross defection of talent from Oracle around the Lustre file system.”
DataDirect Networks customer Stephen Simms, manager of the Data Capacitor project at Indiana University, said it will take initiatives like those undertaken by DDN, Whamcloud, and OpenSFS to save Lustre.
“Without people to develop Lustre, then Lustre is going to die,” he said. “National laboratories have a strong investment in Lustre, and they will do their best to keep it alive, but without the pool of talent that exists in for-profit companies, where are you going to be? You’re going to be an organization that desperately needs a file system with only a handful of developers.”
Simms said Lustre is a crucial piece of IU’s Data Capacitor, a high speed, high bandwidth storage system that serves IU campuses and other scientific research sites on the TeraGrid network. The IU team modified Lustre file system code to provide the automatic mapping of user IDs (UIDs) across TeraGrid sites.
“We wouldn’t be able to develop this system for mapping UID space if the file system were closed,” he said. “The fact that it’s open has been a big deal for us. It’s important to have the expertise someplace. It’s a concern that there are so few Lustre developers, and they’ve left Oracle and where have they gone? Some have gone to Xyratex and some to Whamcloud, and who knows where others have gone? It’s important to keep momentum.”
If you want to buy a NetApp FAS3200 storage system, you may have to wait a while – especially if you want it with Flash Cache.
NetApp executives Wednesday said the vendor has received about four times as many orders as expected for the systems that launched last November, and sold out of them because of supply constraints. The failure to fill orders cost NetApp about $10 million to $15 million in sales last quarter, according to CEO Tom Georgens, causing the vendor to miss Wall Street analysts’ expectations with $1.268 billion in revenue.
Georgens said orders for Flash Cache I/O modules were especially higher than expected, and a shortage of those components limited sales.
Georgens said “we have not resolved these problems yet,” and forecast lower sales revenue for this quarter than expected as a result. He said the time customers have to wait for FAS3200 systems varies, and would not confirm whether the lead time was as long as six weeks, as one analyst on the NetApp conference call suggested.
“We’re working like mad to close this gap,” Georgens said. “It’s disappointing to be having this conversation. I can’t tell you with full confidence that we’re going to clear this up. I can tell you with full confidence that we’re working on this night and day. … Overall, in terms of constraints, it’s primarily in the I/O expansion module, certain semiconductor components along the way.”
Georgens also said NetApp is unlikely to follow competitors into large acquisitions. He said NetApp would pursue acquisitions that bring it into new markets or add features that would strengthen its current products. Since EMC outbid NetApp for Data Domain in mid-2009, NetApp has made two smaller acquisitions, picking up object storage vendor Bycast and storage management software vendor Akorri.
There has been speculation that NetApp would acquire a data analytics vendor such as ParAccel or Aster Data to counter EMC’s “big data” 2010 acquisition of Greenplum. However, Georgens said it is unlikely that NetApp will go that route.
“I think there are more attractive adjacent markets for us to pursue,” Georgens said. “Partnerships are available in data analytics. That’s how we’ll go after that, as opposed to being in the analytics business ourselves.”
I had an interesting conversation about IT decision-making while attending an education foundation meeting. The meeting was a chance to speak with CIOs and other senior IT people in a non-business setting. These discussions can be far-ranging and unguarded. In this non-sales, non-consultative environment, the conversation came around to the challenges each executive faced and the risks they had in their positions.
The challenges included personnel issues as well as the products and solutions they had deployed in their IT operations. While it was interesting to hear about what systems (my focus was on storage but they talked about complete solutions that encompassed hardware and software) they had recently purchased, the more interesting discussions were about why they had made these purchases.
One theme came through frequently – many felt they had been stampeded into making a decision and purchase. They felt stampeded in the sense that they weren’t given time to do an adequate (in their opinions) evaluation to make an informed decision.
The reasons for the time pressure included demands from above that the company required immediate action and that business would be lost if the solution was not delivered immediately. Organizationally, a group (an internal customer, for example) would look elsewhere if it failed to get an immediate solution.
“Look elsewhere” took the form of outsourcing, bringing their own systems to meet the needs, or threats of withdrawing budgetary support.
One person in the group said he was being forced to make “haphazard decisions” rather than informed ones. And these haphazard decisions included no thought of how they would affect the overall strategy or IT operations. The word haphazard struck me as alarming.
From my standpoint, this looked like a great opportunity to do a case study of what the lack of a strategy or formal process can mean to IT. Asking what the results were for these stampeded or haphazard decisions did not yield a clear answer, however. It became obvious that the real effect may not become apparent for a few years. The initial responses were more emotional (and negative) than actual quantified results.
What I took away from this discussion was that decisions were still being made that were not informed and not really strategic. Whether they were bad decisions or not would take some time to find out. The people making the haphazard decisions were aware they were not proceeding correctly, and were embarrassed by it. But they felt they had no real choice. They had to make a decision and needed whatever information they could access quickly given the time they had.
Fortunately, the majority of decisions about purchasing systems and solutions (especially storage systems) do not take this route. But the reality is some decisions are being made haphazardly, and that can present problems.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
While working on a project for the IT organization of a large company recently, I began questioning the information it included in its Return On Investment (ROI) calculations. The organization used ROI as a measurement to buy storage as part of a storage efficiency project, but I found problems with the measurement based on what was included in the calculations.
Before explaining the difficulties I had with the information they used, let’s first look at what is typically included in ROI calculations. The definition I give for ROI is: the assessment of return on investment money for savings or gains from project implementation. For a storage project, the ROI calculation weighs the investment against the projected savings and gains.
ROI is usually expressed as a percentage of gain with a payback over a given period of time. I can still hear a salesman saying, “It pays for itself in just 11 months.” That’s the way ROI is used as an economic metric for decision making.
But there are two types of gains usually included in ROI calculations: hard benefits and soft benefits. They are usually defined like this:
Hard benefits include capital savings or deferred capital investments, operational savings, and productivity improvements.
Soft benefits consist of opportunity costs such as maximizing revenue potential by making enterprise information and applications more available, and cost avoidance through minimizing downtime.
It is easy to add items — taking credit for potential economic benefits — or make your numbers overly optimistic in soft benefits. I’ve seen this become a game of liar’s poker in a competitive situation. Because of that, I recommend not including soft benefits while making a decision. Many times the soft benefits are being listed by a party with a lot to gain. It is best to leave those soft benefits as potential opportunities – preferably just listed but not quantified.
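To make that recommendation concrete, here is a minimal sketch of the arithmetic, using hypothetical figures and helper names (not from any actual project): ROI and payback are computed from hard benefits only, while soft benefits are listed but deliberately kept out of the decision math.

```python
import math

def roi_percent(total_gain, investment):
    # ROI expressed as net gain over the period as a percentage of the investment
    return (total_gain - investment) / investment * 100

def payback_months(investment, monthly_hard_savings):
    # Months until cumulative hard savings cover the investment
    return math.ceil(investment / monthly_hard_savings)

# Hypothetical storage efficiency project
investment = 110_000      # capital cost of the project
hard_benefits = 240_000   # deferred capital + operational savings over 24 months
soft_benefits = ["revenue potential", "downtime avoidance"]  # listed, not quantified

print(roi_percent(hard_benefits, investment))  # ROI over the period, hard benefits only
print(payback_months(investment, 10_000))      # the salesman's "pays for itself in 11 months"
```

Quantified soft benefits can then be compared against actuals at the end of the ROI period, rather than padding the number used to justify the purchase.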
Soft gains can still be used to help validate the accuracy of the project planning and for future corrective action. That’s why as part of the closed-loop process we cover in our Evaluator Group classes, we recommend including soft gains in the actual returns at the completion of the ROI time period.
The important thing to take from this is that using ROI for economic decision-making needs to have a more discerning review of the inputs, and the review process must include a validation of the underlying assumptions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Although it may have seemed like it, EMC didn’t actually upgrade its entire product portfolio during its much-hyped product launch last month. During an investors’ event webcast Tuesday, EMC executives said they have more product upgrades coming at EMC World in May, including primary block-level data deduplication.
EMC president Pat Gelsinger called primary block dedupe the missing piece in one key part of EMC’s strategy — aligning storage with VMware.
“There is one feature NetApp has that we don’t have – primary block deduplication,” Gelsinger said. “We’ll have that in the second half.”
Gelsinger said primary block dedupe is just one piece of the data reduction picture, but the rollout will “close that last piece of competitive gap that we don’t have.”
EMC is the market leader in backup deduplication with its Data Domain and Avamar products and offers compression for primary data, but NetApp beat it to the punch with primary dedupe across its FAS storage arrays. Other vendors are also working on delivering dedupe for primary data. Dell acquired the technology when it bought Ocarina last year but hasn’t delivered a timeframe for integrating it with its storage systems. Xiotech and BlueArc have signed OEM agreements with Permabit to use Permabit’s Albireo primary dedupe software. IBM is going the compression route for primary data with its Storwize acquisition. EMC’s primary dedupe announcement may pressure its competitors to move forward with their products.
Gelsinger said other products coming this year include VPlex with asynchronous replication over longer distances and a Symmetrix VMAX refresh, which he said will be “expanding down and across emerging markets.”
Whatever products EMC delivers this year, you can expect the vendor to put a cloud twist on them. CEO Joe Tucci made the cloud the big theme of his presentation Tuesday.
“We did not invent the cloud, but we recognized the opportunity of the cloud early,” he said. “We believe this will be the next big game-changing trend in the IT industry. It’s a wave of disruption, and I don’t believe there will be an IT segment that will be immune to this wave.”
Flash memory array startup Violin Memory today said it received $35 million in funding, naming NAND flash maker Toshiba America and networking vendor Juniper Networks as investors. There were other investors not named, including one that sounds a lot like Hewlett-Packard.
Violin’s press release said Toshiba and Juniper were investors, “along with other corporate partners, crossover investment funds, high net worth industry leaders and private equity general partners.” In an interview with StorageSoup, Violin Memory CEO Don Basile said the other investors include a “large storage and systems company that doesn’t want to be named.”
Basile wouldn’t confirm or deny if that partner was HP, but said the investor was a company that Violin has worked with. HP last June posted blazing fast online transaction processing (OLTP) benchmarks using four Violin Memory Arrays with HP ProLiant Blade Servers, marking the first time HP used another vendor’s memory system in that type of benchmark. There have been whispers as well as published reports in recent weeks that HP and Violin are working on deepening their relationship.
Basile said the new investors will lead to partnerships that will boost Violin’s distribution. He said two major trends are poised to drive Flash storage implementation. He credited Oracle CEO Larry Ellison with identifying the first trend last year when he said Oracle would use flash to speed up its databases. The second trend Basile sees is for flash to provide a foundation for building private clouds. That’s where Juniper comes in.
“Juniper exists in the highest end of data centers, and paired with a memory array like Violin’s, it can enable private clouds to be built,” Basile said. “We have [flash] in processors, and now we’ll have higher performance networking and I/O tiers to match those processors. This is something that will start to be talked about this year, and in 2012 people will start to roll out cloud services based on these technologies.”
Basile said the funding will allow Violin to double its headcount this year, mostly by expanding its sales and engineering teams. The company currently has just over 100 employees. He also promised new product rollouts in the coming months. Violin’s current products include the 3200 Flash Memory Array, the 3140 Capacity Flash Memory Array, and a vCache NFS caching device built on technology it acquired last year from Gear6.
Violin and Toshiba announced a strategic partnership last April, disclosing that Violin is using Toshiba’s NAND flash chips.
Solid state storage startup WhipTail Technologies is looking to move into its next phase of development after hiring a new CEO this week.
Dan Crain, whose 27 years in the IT business include five years as CTO of Brocade, takes over at WhipTail with the belief that the spinning disk era in enterprise storage is coming to an end. Crain said it’s a matter of time until the right type of solid state storage takes over, and he believes WhipTail’s I/O acceleration appliances are on the right track.
“Without a doubt, we’re getting to the end of spinning media and disk,” he said. “It’s been around for 40 years, and the physics are getting more challenging. Solid state as a medium for persistent storage is getting there.”
Crain said he knows just having solid state in a storage system isn’t enough. All of the major storage vendors do that now, and smaller vendors such as Nimbus Data, Texas Memory Systems, Avere Systems, Violin Memory, and Alacritech have all-solid state disk (SSD) systems.
WhipTail’s Data Center XLR8r and Virtual Desktop XLR8r 2U multi-level cell (MLC) appliances use NAND and DRAM, and focus on block storage, particularly applications with high I/O requirements. Crain said he sees all-SSD appliances as the way to go rather than hybrid systems that use solid state and spinning disk as separate tiers. He said the market will take care of one SSD hurdle – price – while superior technology can overcome reliability issues.
“A lot of tiered systems exist because the cost of solid state storage media was very, very, very expensive,” Crain said. “The price has fallen substantially now. With Flash-based disk, NAND-memory based SSDs, the more you write to them, the more they wear out. We have patented technologies that help deal with that. Hybrid systems that move things in and out of SSDs cause write wear. We don’t think that’s the way to go.”
Crain said the Data Center appliance is tuned around database functions such as transaction logs and indexes, while the Virtual Desktop model is tuned for virtual desktop performance loads. “We’re not going to be the production data store for databases, we will be the index and transactional log storage device,” he said. “We keep system files, not user data.”
Since it started in 2008, WhipTail has been run by founders Ed Rebholz and James Candelaria. Crain replaces Rebholz as CEO, while Candelaria stays on as CTO. Crain said WhipTail is still in the early stage of product sales with “several dozen” customers, and he expects to roll out product enhancements soon.
“We have a lot of growth to do over the next two or three years and you’ll see us produce a lot of interesting technology,” he said. “Some of it will be very radical.”
When EMC’s Mozy told customers this week it was raising prices on its home cloud backup service, company execs said the hike for most customers would be $1 per month. That would come to $5.99 for 50 GB of online backup.
Well, scores of angry MozyHome customers say 50 GB won’t come close to cutting it for them. The support forum on Mozy’s web site has more than 50 pages of posts from disgruntled customers who claim their costs would rise drastically under the new plan, calculating they would have to pay up to thousands of dollars a year to continue with Mozy. Many customers who have posted on the Mozy site said they have discontinued the service, with others threatening to do so.
Mozy reps said the price increase is necessary because customers are storing more photos and videos online, and backing up multiple computers.
Instead of charging $4.99 for unlimited backup, Mozy’s new entry-level pricing is $5.99 for 50 GB on one machine. Mozy suggested most of its customers would find that plan satisfactory. Customers who need more online storage must pay $9.99/month for 125 GB on up to three devices, plus $2/month for each additional device and $2/month for each additional 20 GB of storage.
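The arithmetic behind the complaints is easy to check. Here is a small sketch (a hypothetical helper, not anything Mozy publishes) of the new monthly cost under the tiers as described, assuming partial 20 GB blocks are billed whole:

```python
import math

def mozy_monthly_cost(gb, devices=1):
    # New entry-level plan: $5.99 for up to 50 GB on one machine
    if gb <= 50 and devices == 1:
        return 5.99
    # Larger plan: $9.99/month covers 125 GB on up to three devices...
    cost = 9.99
    # ...plus $2/month for each additional device
    cost += 2 * max(0, devices - 3)
    # ...plus $2/month per additional 20 GB (assumed rounded up to full blocks)
    cost += 2 * math.ceil(max(0, gb - 125) / 20)
    return round(cost, 2)

# A customer with 275 GB on one machine, formerly $4.99/month unlimited:
print(mozy_monthly_cost(275))  # about $26/month, roughly the jump forum posters describe
```

Stretched over a multi-year contract and several machines, figures like the "$210 to $1,680" complaint below fall out of the same arithmetic.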
The new price goes into effect immediately for new customers. Existing customers can keep the unlimited pricing until the end of their current contracts.
Mozy CMO Russ Stockdale said the company hasn’t changed pricing for its MozyPro (business) service, because its customers store mostly business files and not photos and videos.
“On the consumer side, pressure’s been there because of large files,” he said. “We feel like this is something the industry has to come to terms with.”
He said competitors and other cloud backup providers have size limits on files they will back up, or throttle bandwidth to keep prices down. “We’ve concluded those aren’t the right ways to do it,” he said. “We provide the same quality of service across all file types and storage allocations.”
Stockdale also said Mozy surveyed customers before making the price increase and got the impression they thought the hike was fair.
Many customers obviously do not think so. Here’s a sampling of comments from the Mozy forum:
“I almost laughed when I got the email about the new plans from Mozy. Anyone that knows enough about computers to back up their files remotely has way more than 50gb worth of stuff worth backing up.
I understand that, in any field, unlimited plans eventually go away. But usually companies do so by replacing them with a capped plan that’s still virtually unlimited for most users. We’ve seen this with cell phone carriers and ISPs capping their unlimited plans at heights that only effect the top 0.01% of users.
But here, going from unlimited storage to 50gb of storage in a time when you can’t even buy a 50gb hard drive anymore because it’s so small? That’s a ballsy move.
Going from unlimited to 50gb would have been a fair move 10 years ago. Now, not so much.”
“Someone else made a great point earlier in this thread that it is sad that my iPod can backup more than 50gb! If the folks at Mozy think 50gb of data is a reasonable “average user” threshold to set their baseline, they must still be living in 2003.”
“I thought Mozy was great, but I have over 600GB backed up on them, that’s over a $1,200 a year they want for that with the new pricing plans. I was debating between cloud storage and building my own server … that decision is a lot easier now. I can build a server at a relative’s house and FTP backup and still save money, or I could just run it from my house with a UPS in my detached garage for safety if I was that paranoid. … For $200, I could get 2tb backup with a dock, that’s a lot faster than Mozy and no constant resource drain.”
“This new pricing structure is ridiculous! I currently have 275 gigs backed up and this will jump my monthly fee from $4.5 to $30!! I can buy a small fireproof safe and a good external for that kind of money. I understand that things have changed since you started, but gradual changes or a better pricing structure might have worked better. Hitting loyal customers with a huge increase is just going to drive them away to other alternatives.
Only 50GB?! Seriously – only casual computer users only have 50GB of files and media, and those type of users aren’t savvy enough to even think to backup their files.”
“… With this new plan my fee would increase from $210 for 2 pc’s for 2 years to $1680. No way Mozy. My need for storage increase with 100-200 GB per year so if I renewed now the next renewal would run up to a staggering $2500! No way is that going to happen.
So my plan is to buy two external 2 TB hd’s and use them for offsite storage at my workplace. I will backup to one and the other will be offsite and once a month I will switch them. And just to clarify my online backup is NOT my only backup it is just a convenient offsite backup.
And for the people that are considering a competitor please remember that if you have more than 200 GB then your upload speed will be reduced to only 100 kbit/s. So for me they are not really an option. It took long enough to complete the initial backup here on mozy with a 2 mbit/s upload line.”
CommVault CEO Bob Hammer pointed to his company’s strong sales results last quarter as a validation of Simpana 9 released in October and a change in the way organizations approach backup today.
CommVault Tuesday reported revenue of $84 million last quarter, up 18% from last year and 11.2% from the previous quarter. The results were a big jump from two quarters ago when CommVault slumped to $66.3 million in revenue when it missed its projections by a wide margin. CommVault also seems to have gained on market leader Symantec, which last week reported its backup and archiving revenue increased five percent over last year.
While Simpana 9 drew a lot of attention for adding source deduplication to the target dedupe in the previous version, it also leans heavily on replication and snapshot technology to create recovery copies of data without moving the data. These technologies were cited by Gartner in its most recent Magic Quadrant for Enterprise Disk-Based Backup/Recovery that placed CommVault at the head of the leaders group.
“The old backup methodology of managing the backup stream is no longer efficient,” Hammer said. “The backup copy today is your persistent long-term archive copy. The industry has shifted into managing data using snaps and replication as the primary way of managing and moving data. We can seamlessly create a long-term non-corrupt point-in-time copy. The backup business is now a more comprehensive data management business.”
Hammer also said CommVault will not follow Symantec’s lead of selling its backup software on branded appliances. CommVault sells Simpana on hardware from partners such as Dell, and Hammer said he will continue with that strategy.
“We don’t want to get into the hardware business, for a lot of reasons,” he said. “At least not on our own. We will provide our software to partners who embed it and sell it on hardware.”
CommVault’s sales spike last quarter came despite a drop in revenue from its Dell OEM deal. Dell accounts for 20% of CommVault revenue, but that revenue decreased 13% year over year and 10% sequentially last quarter. Hammer said the decline was due in part to a drop in sales to the federal government, which makes up a large part of CommVault’s sales through Dell.
As for cloud backup, Hammer said CommVault is seeing large deals through managed service providers (MSPs) while enterprises are taking a slow approach. “But whether enterprises are going to deploy to the cloud yet or not, cloud capability is one of the requirements they’re looking for,” he said.
Two topics are top of mind with IT managers and vendors today: data center transformation and data center optimization. These topics come up all the time in my discussions with CIOs and IT managers about their initiatives, and with vendors about their strategies to deliver solutions that meet demands.
Unfortunately, some people confuse the meaning of the two initiatives. They are similar, but different. There is a misunderstanding about how they should be applied, and that leads to discussions that start down the wrong track and can take time to rewind and re-focus.
Let me put some perspective on how the terms should be used, and how they will evolve over time. Data center transformation is a big-picture vision about changing the premise of how a data center operates. Think of it as a clean-sheet-of-paper discussion about IT. The discussion is really about how compute and storage services should be delivered to customers in the data center. The customers could be departments, individual users, or even other companies.
Discussions about cloud computing have transitioned into data center transformation discussions for most IT professionals. These cloud discussions include public clouds, where compute and information storage are handled by a service provider; private clouds, where the data center delivers its services as cloud-like offerings; and hybrid clouds, where some services and information storage are done on-premises within the data center and some utilize a public cloud.
For IT professionals, data center transformation requires putting together a new strategy for providing services for today and the future – usually focused on providing IT as a service (ITaaS). For more on data center transformation, see Evaluator Group articles here and here.
Data center optimization is focused on making the data center more efficient. The optimization may refer to deploying and exploiting new technologies and methods. Data center optimization is approached as an overall goal but is broken down into specific areas. Storage efficiency is one of those areas for optimization that has many elements.
Data center optimization should include a rigorous and believable measurement system for making decisions and demonstrating results. Individual areas generally use ROI methodology for the measurements. Overall, a broad TCO calculation is used to show the effect of multiple initiatives.
For data center transformation and data center optimization discussions, there are differences in time scale, the decision-making process, the amount of effort involved, and the costs. Transformation is a company-wide directional decision with long-term implications. Consequently, the decision time may be lengthy. Optimization, primarily broken down into individual projects as elements of a larger goal, will have shorter decision cycles and be put into effect (at least partially) much sooner. These are important differences to take into consideration.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).