Storage Soup


February 2, 2011  6:34 PM

Mozy customers balk at cloud backup price increase

Dave Raffo

When EMC’s Mozy told customers this week it was raising prices on its home cloud backup service, company execs said the hike for most customers would be $1 per month. That would come to $5.99 for 50 GB of online backup.

Well, scores of angry MozyHome customers say 50 GB won’t come close to cutting it for them. The support forum on Mozy’s web site has more than 50 pages of posts from disgruntled customers who claim their costs would rise drastically under the new plan, with some calculating they would have to pay thousands of dollars a year to continue with Mozy. Many customers who have posted on the Mozy site said they have discontinued the service, with others threatening to do so.

Mozy reps said the price increase is necessary because customers are storing more photos and videos online, and backing up multiple computers.

Instead of charging $4.99 for unlimited backup, Mozy’s new entry-level pricing is $5.99 for 50 GB on one machine. Mozy suggested most of its customers would find that plan satisfactory. Customers who need more online storage must pay $9.99/month for 125 GB on up to three devices, plus $2/month for each additional device and $2/month for each additional 20 GB of storage.
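To put the complaints in context, here is a minimal sketch of the arithmetic behind the new plan as described above. How Mozy rounds partial 20 GB increments, and whether the 50 GB tier covers only a single machine, are assumptions for illustration, and the higher figures cited by forum posters may reflect multiple computers or different assumptions.

    import math

    def mozy_monthly_cost(gb, devices=1):
        # Entry-level plan: $5.99 for 50 GB on one machine (assumed single-device only).
        if gb <= 50 and devices == 1:
            return 5.99
        # Base plan: $9.99 for 125 GB on up to three devices.
        cost = 9.99
        if gb > 125:
            # Assumes partial increments are billed as whole 20 GB blocks at $2 each.
            cost += 2 * math.ceil((gb - 125) / 20)
        if devices > 3:
            cost += 2 * (devices - 3)   # $2 per device beyond three
        return cost

    # e.g. a customer with roughly 275 GB backed up on one machine pays about
    # $26/month under these assumptions; posters citing higher figures
    # presumably back up more data or more machines.
    print(mozy_monthly_cost(275))   # 25.99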

The new price goes into effect immediately for new customers. Existing customers can keep the unlimited pricing until the end of their current contracts.

Mozy CMO Russ Stockdale said the company hasn’t changed pricing for its MozyPro (business) service, because its customers store mostly business files and not photos and videos.

“On the consumer side, pressure’s been there because of large files,” he said. “We feel like this is something the industry has to come to terms with.”

He said competing cloud backup providers have size limits on the files they will back up, or throttle bandwidth, to keep prices down. “We’ve concluded those aren’t the right way to do it,” he said. “We provide the same quality of service across all file types and storage allocations.”

Stockdale also said Mozy surveyed customers before making the price increase and got the impression they thought the hike was fair.

Many customers obviously do not think so. Here’s a sampling of comments from the Mozy forum:

“I almost laughed when I got the email about the new plans from Mozy. Anyone that knows enough about computers to back up their files remotely has way more than 50gb worth of stuff worth backing up.
I understand that, in any field, unlimited plans eventually go away. But usually companies do so by replacing them with a capped plan that’s still virtually unlimited for most users. We’ve seen this with cell phone carriers and ISPs capping their unlimited plans at heights that only effect the top 0.01% of users.

But here, going from unlimited storage to 50gb of storage in a time when you can’t even buy a 50gb hard drive anymore because it’s so small? That’s a ballsy move.

Going from unlimited to 50gb would have been a fair move 10 years ago. Now, not so much.”

“Someone else made a great point earlier in this thread that it is sad that my iPod can backup more than 50gb! If the folks at Mozy think 50gb of data is a reasonable “average user” threshold to set their baseline, they must still be living in 2003.”

“I thought Mozy was great, but I have over 600GB backed up on them, that’s over a $1,200 a year they want for that with the new pricing plans. I was debating between cloud storage and building my own server … that decision is a lot easier now. I can build a server at a relative’s house and FTP backup and still save money, or I could just run it from my house with a UPS in my detached garage for safety if I was that paranoid. … For $200, I could get 2tb backup with a dock, that’s a lot faster than Mozy and no constant resource drain.”

“This new pricing structure is ridiculous! I currently have 275 gigs backed up and this will jump my monthly fee from $4.5 to $30!! I can buy a small fireproof safe and a good external for that kind of money. I understand that things have changed since you started, but gradual changes or a better pricing structure might have worked better. Hitting loyal customers with a huge increase is just going to drive them away to other alternatives.

Only 50GB?! Seriously – only casual computer users only have 50GB of files and media, and those type of users aren’t savvy enough to even think to backup their files.”

“… With this new plan my fee would increase from $210 for 2 pc’s for 2 years to $1680. No way Mozy. My need for storage increase with 100-200 GB per year so if I renewed now the next renewal would run up to a staggering $2500! No way is that going to happen.

So my plan is to buy two external 2 TB hd’s and use them for offsite storage at my workplace. I will backup to one and the other will be offsite and once a month I will switch them. And just to clarify my online backup is NOT my only backup it is just a convenient offsite backup.

And for the people that are considering a competitor please remember that if you have more than 200 GB then your upload speed will be reduced to only 100 kbit/s. So for me they are not really an option. It took long enough to complete the initial backup here on mozy with a 2 mbit/s upload line.”

February 2, 2011  3:58 PM

CommVault CEO: Industry has shifted to snap-based backups

Dave Raffo

CommVault CEO Bob Hammer pointed to his company’s strong sales results last quarter as validation of Simpana 9, released in October, and of a change in the way organizations approach backup today.

CommVault Tuesday reported revenue of $84 million last quarter, up 18% from last year and 11.2% from the previous quarter. The results were a big jump from two quarters ago when CommVault slumped to $66.3 million in revenue when it missed its projections by a wide margin. CommVault also seems to have gained on market leader Symantec, which last week reported its backup and archiving revenue increased five percent over last year.

While Simpana 9 drew a lot of attention for adding source deduplication to the target dedupe in the previous version, it also leans heavily on replication and snapshot technology to create recovery copies of data without moving the data. These technologies were cited by Gartner in its most recent Magic Quadrant for Enterprise Disk-Based Backup/Recovery that placed CommVault at the head of the leaders group.

“The old backup methodology of managing the backup stream is no longer efficient,” Hammer said. “The backup copy today is your persistent long-term archive copy. The industry has shifted into managing data using snaps and replication as the primary way of managing and moving data. We can seamlessly create a long-term non-corrupt point-in-time copy. The backup business is now a more comprehensive data management business.”

Hammer also said CommVault will not follow Symantec’s lead of selling its backup software on branded appliances. CommVault sells Simpana on hardware from partners such as Dell, and Hammer said he will continue with that strategy.

“We don’t want to get into the hardware business, for a lot of reasons,” he said. “At least not on our own. We will provide our software to partners who embed it and sell it on hardware.”

CommVault’s sales spike last quarter came despite a drop in revenue from its Dell OEM deal. Dell accounts for 20% of CommVault revenue, but that revenue decreased 13% year over year and 10% sequentially last quarter. Hammer said the decline was due in part to a drop in sales to the federal government, which makes up a large part of CommVault’s sales through Dell.

As for cloud backup, Hammer said CommVault is seeing large deals through managed service providers (MSPs) while enterprises are taking a slow approach. “But whether enterprises are going to deploy to the cloud yet or not, cloud capability is one of the requirements they’re looking for,” he said.


February 1, 2011  1:16 PM

Data center transformation and optimization aren’t the same

Randy Kerns

Two topics are top of mind with IT managers and vendors today: data center transformation and data center optimization. These topics come up all the time in my discussions with CIOs and IT managers about their initiatives, and also with vendors about their strategies to deliver solutions to meet those demands.

Unfortunately, some people confuse the meaning of the two initiatives. They are similar, but different. There is a misunderstanding about how they should be applied, and that leads to discussions that start down the wrong track and can take time to rewind and re-focus.

Let me put some perspective on how the terms should be used, and how they will evolve over time. Data center transformation is a big-picture vision about changing the premise of how a data center operates. Think of it as a clean-sheet-of-paper discussion about IT. The discussion is really about how compute and storage services should be delivered to the data center’s customers. The customers could be departments, individual users, or even other companies.

Discussions about cloud computing have transitioned into data center transformation discussions for most IT professionals. These cloud discussions include public clouds, where compute and information storage are handled by a service provider; private clouds, where the data center delivers its services as cloud-like offerings; and hybrid clouds, where some services and information storage remain on-premise within the data center and some use a public cloud.

For IT professionals, data center transformation requires putting together a new strategy for providing services for today and the future – usually focused on providing IT as a service (ITaaS). For more on data center transformation, see Evaluator Group articles here and here.

Data center optimization is focused on making the data center more efficient. The optimization may refer to deploying and exploiting new technologies and methods. Data center optimization is approached as an overall goal but is broken down into specific areas. Storage efficiency is one of those areas for optimization that has many elements.

Data center optimization should include a rigorous and believable measurement system for making decisions and demonstrating results. Individual areas generally use an ROI methodology for the measurements. Overall, a broad TCO calculation is used to show the effect of multiple initiatives.
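As a generic sketch of the kind of measurement described here (these are standard formulas, not a specific Evaluator Group methodology), an individual project is usually justified with a simple ROI figure while the broader program is tracked with a multi-year TCO:

    def roi(annual_benefit, annual_cost, investment, years=3):
        # Return on investment for a single optimization project over the period.
        gain = years * (annual_benefit - annual_cost)
        return (gain - investment) / investment

    def tco(capex, annual_opex, years=3):
        # Total cost of ownership: up-front spend plus operating costs over the period.
        return capex + years * annual_opex

    # e.g. a hypothetical storage-efficiency project: $100k up front,
    # $80k/year saved in disk, $10k/year to operate
    print(f"{roi(80_000, 10_000, 100_000):.0%} ROI over three years")   # 110%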

For data center transformation and data center optimization discussions, there are differences in time scale, the decision-making process, the amount of effort involved, and the costs. Transformation is a company-wide directional decision with long-term implications. Consequently, the decision time may be lengthy. Optimization, primarily broken down into individual projects as elements of a larger goal, will have shorter decision cycles and be put into effect (at least partially) much sooner. These are important differences to take into consideration.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 27, 2011  1:31 AM

Quantum upgrades dedupe software, while sales miss mark

Dave Raffo

Quantum today said it is doubling the speed of its data deduplication software, at the same time it admitted sales of large disk backup devices with dedupe last quarter were disappointing.

Quantum’s DXi 2.0 software will have the same dedupe ratio as the original DXi version, but Quantum claims it has the fastest throughput of any dedupe software on the market. The vendor also eliminated the option to dedupe post-process and failed to add global deduplication capabilities.

The vendor made its 2.0 release at the same time as it reported earnings, falling well short of its revenue guidance mainly because of disappointing enterprise deduplication system sales.

Quantum said a DXi4500 SMB appliance running DXi 2.0 software will deduplicate at up to 1.4 TB/hour for NAS and 1.7 TB/hour for the Symantec OpenStorage (OST) protocol, and a midrange DXi6500 with version 2.0 will dedupe at 4.3 TB/hour for NAS and 4.6 TB/hour for OST.

The DXi software upgrade follows enhancements by EMC Data Domain and Sepaton to their deduplication products over the last nine days.

The first version of DXi let customers choose between inline and post-process deduplication, but DXi 2.0 only supports inline dedupe. Quantum SVP Janae Stow Lee said that with the speedier software plus more powerful processors, inline dedupe is fast enough to negate any advantage of post-process. DXi 2.0 still does not support global dedupe across systems. Software deduplication products support global dedupe, as do Quantum’s hardware rivals Sepaton, Data Domain and IBM, at least across two nodes.

“This is not a clustered system,” Lee said. “There are advantages, but also complexity and cost disadvantages of clustered systems. We have a different strategy for that over time.”
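For readers less familiar with the terms: inline dedupe fingerprints data as it arrives and discards duplicates before they are ever written, while post-process lands everything on disk first and dedupes later. Below is a minimal sketch of the inline approach; it illustrates the general technique, not Quantum’s DXi implementation.

    import hashlib

    def inline_dedupe(chunks, store):
        # Fingerprint each incoming chunk; only store chunks not seen before,
        # so duplicate data never touches disk.
        written = 0
        for chunk in chunks:
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint not in store:
                store[fingerprint] = chunk
                written += len(chunk)
        return written

    store = {}
    backup_stream = [b"block-A", b"block-B", b"block-A"]   # one duplicate chunk
    print(inline_dedupe(backup_stream, store))              # 14 bytes written instead of 21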

Quantum will begin selling the 2.0 software on its SMB DXi4500 and midrange DXi6500 systems by March, and it will be available on the higher-end DXi6700 and DXi8500 hardware over the summer. DXi 2.0 will have the same price as the 1.0 software and will be a free upgrade for existing customers.

Quantum today reported revenue of $176 million for last quarter, which fell below its guidance of $185 million to $200 million. Quantum executives said the vendor did have record sales of its midrange DXi systems and StorNext file system software and gained share in tape, but had trouble closing large deals with its new DXi8500.

“We continue to make progress on our growth strategy, but not as much as some people expected,” Quantum CEO Rick Belluzzo said of the revenue miss. “The biggest factor was the enterprise business was substantially down because of deal flow and a transition to the 8500. We had a lot more [DXi] deals, but a lot of small ones.”


January 25, 2011  5:13 PM

Cloud storage gateway startup Cirtas gets funding, new CEO

Dave Raffo

Three multi-billion-dollar storage acquisitions over the past two years have made storage startups a hot target for venture capitalists, especially those that deal with moving data to the cloud.

Cirtas today unveiled a new CEO as well as $22.5 million in a second round of funding. Cirtas is among several cloud storage gateway vendors who launched over the past year or so, and they have been busy with funding. Nasuni ($15 million) and Panzura ($12 million) closed funding rounds last month and StorSimple ($13 million) received funding in September. But with all of the large storage vendors also looking to the cloud, it’s unlikely that the market can support so many startups.

Cirtas CEO Gary Messiana said it will take more than the ability to move data off to the cloud for one or more of these startups to stand out. He said it is the intelligence in the Cirtas Bluejet Storage Controller that is unique. The Bluejet controller presents an iSCSI target to servers as if it were an array on a local SAN. It handles encryption, tiering, data reduction and snapshots as well as sending data off to the cloud. Bluejet is used for backup data, along with tier 2 and tier 3 primary data. Customers can keep data on the appliance or move it off to cloud service providers such as Amazon, which was a first-round investor in Cirtas.

“We have cache on our box, we have disk storage on our box, and our algorithms and sophisticated intelligence determines whether each file should reside in cache, the disk array, or we should move it off to the cloud,” Messiana said. “All of our customers are using us to put data into the cloud. Not all of [the data] obviously, but the portion that makes sense.”
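Cirtas hasn’t published how those placement decisions are made, but the general idea of a cloud gateway’s tiering policy can be shown with a toy example that scores data by recency and frequency of access. The names and thresholds below are invented for illustration, not Bluejet’s actual algorithm.

    import time
    from dataclasses import dataclass

    @dataclass
    class Extent:
        # A chunk of data tracked by a hypothetical cloud-gateway cache.
        last_access: float    # epoch seconds
        access_count: int
        size_bytes: int

    def place(extent, now=None, hot_window=3600, warm_window=86400):
        # Toy policy: hot data stays in cache, warm data on local disk,
        # cold data is pushed out to the cloud tier.
        now = now or time.time()
        age = now - extent.last_access
        if age < hot_window and extent.access_count > 10:
            return "cache"
        if age < warm_window:
            return "local disk"
        return "cloud"

    # Data last touched two hours ago, read a few times: kept on local disk.
    print(place(Extent(last_access=time.time() - 7200, access_count=3, size_bytes=4096)))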

Messiana won’t say how many customers Cirtas has yet, but he said the new funding will be used to beef up engineering, sales, and marketing for the Bluejet product. He replaces founder Dan Decasper, who remains with Cirtas as CTO. Messiana came from Cirtas investor Bessemer Venture Partners, where he was an entrepreneur in residence. He has also been a CEO at Netli and Diligent Software Systems.

He said he learned a lesson at content delivery company Netli that should help him forge a strategy at Cirtas. “I saw that no large content owner wants to go to a single CDN [content delivery network],” he said. “We believe it will be the same thing for the cloud. We believe large enterprises will want to use multiple providers. Customers don’t want to get locked into a single back end.”

For that reason, he said, Cirtas will work closely with other cloud providers besides Amazon S3. Its other cloud service partners include Iron Mountain, EMC Atmos and AT&T Synaptic Storage as a Service.


January 24, 2011  4:11 PM

Hosts vs. servers in open systems storage

Randy Kerns

While working on a product analysis for the recently launched EMC VNX multiprotocol storage systems, I picked up on something I’ve also seen in other recent vendor presentations.

I’m seeing the word host commonly used to refer to the computing element that storage connects to, either directly or through a SAN. The word host takes me back to the mainframe terminology that has been around for so long.

The mainframe world has an entire lexicon of terms that are different than those used in the open systems world. A while back I made a list that translated the terms between mainframe and open systems for a storage systems class I was teaching. That list became popular and many people asked for a copy (if you would like a copy, send an email to info@evaluatorgroup.com).

In open systems, people have traditionally used the term servers instead of hosts. Now I’m seeing vendors use the term host in documentation and presentations for open systems. There are two reasons for this:

First, vendors generally define a host as a server running an application. This seems to be the most common-sense definition. Second, the word server is taking on a broader meaning today. There are a number of reasons for this, specifically in the storage systems world.

With many storage solutions today, the storage function is really an application running on a standard server. It is a special application, usually with specific hardware requirements, and the system is often referred to as a server with the storage application. These systems may be characterized as appliances or even as a software package that can be deployed on a server. This usage clouds the meaning of the term server and requires further explanation to determine which server is being referred to.

The standard client/server model that once defined open systems may no longer easily map to the definitions used where a server is running an application and has storage attached to the server. So when referring to a server running an application and storage that is critical to the operation, the term host can make it clearer. And, I suspect, that’s the main reason we’re seeing the term used that way so often now.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 21, 2011  8:19 PM

Data miner strikes gold with DRAM storage appliance

Dave Raffo

We’ve seen that these are still early days for solid-state and Flash adoption in enterprise storage, and people are still trying to figure out the best way to implement the technology. That still leaves the door open to new approaches.

One of those new approaches is that of Kaminario, which in mid-2010 came out of stealth with a DRAM-based solid state storage appliance that it claims can provide faster access to data in key applications. Kaminario also uses hard drives in its K2 appliances, which consist of blade servers with redundant hard drives, Fibre Channel switches and redundant UPS. Its KOS operating system controls load balancing of data across the DRAM of each Data Node, making the DRAM of the entire system look like a single high-speed disk to the application.
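Kaminario hasn’t detailed KOS internals publicly, but the basic trick of making many nodes’ DRAM look like one fast disk can be sketched as a simple striped address map. This is a toy illustration with made-up parameters, not Kaminario’s actual design.

    def locate_block(lba, node_count, block_size, node_dram_bytes):
        # Toy address map: logical blocks are striped round-robin across nodes,
        # so reads and writes spread evenly over the pooled DRAM.
        node = lba % node_count
        stripe = lba // node_count
        offset = stripe * block_size
        if offset + block_size > node_dram_bytes:
            raise ValueError("address beyond the DRAM pool")
        return node, offset

    # Block 1,000,001 in a hypothetical 8-node pool with 64 GiB of DRAM per node:
    print(locate_block(1_000_001, node_count=8, block_size=4096, node_dram_bytes=64 * 2**30))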

Now that K2 appliances have been on the market awhile, Kaminario has been identifying customers using them for a performance boost. One of them is Digital Trowel, an Israel-based Web data mining company that sifts through Internet records to find relevant data quickly for customers. Digital Trowel CTO Anton Bar said that with EMC Clariion SAN arrays, it took more than a week to crawl five billion database records for his customers, find errors and correct them. Since adding a K2 appliance, he said, it now takes three days to mine those records.

“The bottom line is, our identity resolution process was shortened by about 50 percent, and that’s very important in our line of business,” Bar said.

Bar said he considered several solid state approaches, including adding SSD drives to his EMC Clariion array, going to an all-flash SSD appliance and using Flash on PCIe cards. He tried SSDs in his array first, but said that didn’t give him the performance increase he needed. K2 appliances start at $50,000, and Bar said that was a bargain compared to other methods.

“In addition to simply shoving flash discs into our Clarion, which didn’t improve the throughput at all and was terribly expensive, we considered also the Texas Memory Systems RamSan products,” he said. “However, they had the same price, half of the storage space and lower speed [than K2] — a clear no-brainer. We also considered Fusion-io ioDrive Flash cards – but they weren’t fail proof. There was no redundancy at all.”

The K2 appliance may not win out over other approaches in all cases, but it shows that solid state storage options are still expanding.


January 18, 2011  5:39 PM

Dell to resell EMC VNX, not VNXe

Dave Raffo

There has been a lot of speculation as to whether Dell will offer EMC’s newly launched VNX and VNXe unified storage systems. The EMC-Dell OEM relationship has been on the rocks, and unlike previous Clariion launches, Dell was not part of Tuesday’s EMC event, at which EMC said it was expanding its channel partner program.

A Dell spokesman said Dell will resell the VNX midrange systems but not the VNXe SMB products. The spokesman first said that Dell will be an EMC channel partner on the VNXe system, but later clarified that VNXe will not be part of the EMC-Dell relationship. The two vendors are still deciding whether the VNX platform will be an OEM product sold under Dell’s brand or only sold through a reseller deal.

According to an email from Dell spokesman David Graves: “Dell is selling VNX through our reseller agreement. A Dell-branded OEM version of VNX is still in discussion. Dell and EMC have mutually agreed not to use Dell as a channel for the VNXe product – either as a reseller or a Dell-branded OEM offering.”

Dell currently sells EMC’s Clariion SAN, Celerra unified storage and lower-end Data Domain deduplication backup systems under its brand. The Clariion and Celerra have converged into the VNX platform. EMC also launched new Data Domain systems today, but they were enterprise systems and not in the range of those branded by Dell.

Dell’s partnership with EMC took a large hit last year when Dell attempted to acquire EMC rival 3PAR. Hewlett-Packard outbid Dell for 3PAR, but Dell answered last month by saying it would acquire Compellent – another EMC competitor. The VNXe competes with Dell’s EqualLogic and PowerVault brands.


January 18, 2011  2:08 PM

EMC beefs up VMAX and Data Domain while launching VNX

Dave Raffo

EMC made its big product launch today, and while the VNX and VNXe systems were no surprises, the vendor did roll out a few other new products that had been kept secret. Those included new Data Domain data deduplication backup arrays and an archiving product, and software enhancements for the Symmetrix VMAX enterprise SAN array.

You can find details on the VNX systems at SearchStorage.com and the Data Domain systems at SearchDataBackup.com. VMAX enhancements include a new version of FAST (Fully Automated Storage Tiering), now called FAST VP (Virtual Pools), and new operating system software that EMC claims can double performance with no hardware upgrade.

EMC said its Symmetrix Enginuity OS delivers twice as many OLTP transactions and Decision Support System (DSS) queries as the previous version. The new OS allows up to five million virtual machines on one VMAX, and EMC claims new VMware API support provides 800% faster management and provisioning, and 300% faster replication, than VMAX supported before the enhancement.

EMC also gave VMAX new Federated Live Migration software built into the array — claiming that it allows technology refreshes with zero application downtime — and added native 10-Gigabit Ethernet support and built-in RSA Data Protection Manager to encrypt data at rest.

One thing EMC did not include in its releases this morning was any mention of its long-time storage partner Dell. The EMC-Dell relationship has been on the rocks since Dell tried to buy EMC rival 3PAR and then did purchase another rival Compellent last year. When Dell announced the Compellent deal in December, its executives said Dell would continue to sell Clariion, Celerra and Data Domain products. VNX replaces the Clariion and Celerra platforms, and the VNXe “channel optimized” SMB system is a direct competitor to Dell’s EqualLogic iSCSI SAN products.

EMC CEO Joe Tucci gave a state of EMC address to kick off the live product launch, summing up the vendor’s new product line and overall strategy as the “intersection where cloud – IT as a service – meets enterprise data meets big data.”


January 17, 2011  1:46 PM

Storage specialists giving way to IT generalists

Randy Kerns

When I meet with IT personnel or speak at events, I always try to find out what people are doing in their positions in IT. Finding out what their daily work entails is a good barometer of what is happening in IT and helps to identify where the problems are. One trend I notice is that there are fewer storage specialists these days. A storage specialist is someone who understands how the storage systems work, how data flows through IT operations, and how to manage the information.

There is a great deal of tribal knowledge that the storage specialist learns, especially around Fibre Channel. This knowledge may be critical to maintaining operations, but this is changing as storage specialists give way to a growing number of IT generalists.

There are several reasons why IT generalists are replacing storage specialists. A generalist may have more flexibility because he or she can work in many areas. The generalist probably is not as well paid as a storage specialist would have been on average (and the generalists appear to be much younger than the storage specialists I’ve known). IT managers have consciously developed the generalist role to provide resources that can be applied where needed.

But the movement to server virtualization has done more to develop the IT generalist than probably any overt IT management plan. The management of the virtual machine operating system – specifically VMware – is closely linked to storage. VMware has included storage management functions and has continued to improve those capabilities. This has led to the task of storage administration increasingly being included with the server virtualization function. Evaluator Group has an article on simplifying VMware storage management and the emergence of storage and server management for virtual environments.

A question to ask is whether the trend to IT generalists at the expense of storage specialists is a detriment to IT or not. The answer is not simple. Storage system vendors have recognized this trend and reacted by greatly simplifying the configuration and administration of storage systems. They have intentionally targeted the IT generalist with the system management software. Movement to the IT generalist may be considered a good idea, but that really doesn’t matter. It is happening anyway.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

