Storage Soup


February 9, 2011  2:10 PM

EMC prepares block-level primary deduplication

Dave Raffo

Although it may have seemed like it, EMC didn’t actually upgrade its entire product portfolio during its much-hyped product launch last month. During an investors’ event webcast Tuesday, EMC executives said they have more product upgrades coming at EMC World in May, including primary block-level data deduplication.

EMC president Pat Gelsinger called primary block dedupe the missing piece in one key part of EMC’s strategy — aligning storage with VMware.

“There is one feature NetApp has that we don’t have – primary block deduplication,” Gelsinger said. “We’ll have that in the second half.”

Gelsinger said block primary dedupe is just one piece of the data reduction picture, but the rollout will “close that last piece of competitive gap that we don’t have.”

EMC is the market leader in backup deduplication with its Data Domain and Avamar products and offers compression for primary data, but NetApp beat it to the punch with primary dedupe across its FAS storage arrays. Other vendors are also working on delivering dedupe for primary data. Dell acquired the technology when it bought Ocarina last year but hasn’t delivered a timeframe for integrating it with its storage systems. Xiotech and BlueArc have signed OEM agreements with Permabit to use Permabit’s Albireo primary dedupe software. IBM is going the compression route for primary data with its Storwize acquisition. EMC’s primary dedupe announcement may pressure its competitors to move forward with their products.
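For readers who haven’t dug into the mechanics, block-level dedupe boils down to fingerprinting each fixed-size block, storing only one copy of identical blocks, and keeping reference counts so shared blocks aren’t freed while still in use. The sketch below shows that core idea in Python; it is purely illustrative and does not reflect EMC’s or NetApp’s actual implementations.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative fixed block size

    class DedupeStore:
        """Toy block-level dedupe: keep one copy of each unique block, with refcounts."""
        def __init__(self):
            self.blocks = {}    # fingerprint -> block data
            self.refcount = {}  # fingerprint -> number of logical references

        def write(self, data: bytes) -> list:
            """Split data into blocks, store only unseen blocks, return fingerprints."""
            fingerprints = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                fp = hashlib.sha256(block).hexdigest()
                if fp not in self.blocks:   # new block: store it once
                    self.blocks[fp] = block
                self.refcount[fp] = self.refcount.get(fp, 0) + 1
                fingerprints.append(fp)
            return fingerprints

        def read(self, fingerprints: list) -> bytes:
            """Reassemble the logical data from the stored unique blocks."""
            return b"".join(self.blocks[fp] for fp in fingerprints)

Write the same 4 KB block a thousand times and this store holds one block plus metadata; the dedupe ratio is simply logical bytes written divided by unique bytes stored. The hard part in primary storage is doing this without adding latency on the I/O path, which helps explain why primary dedupe arrived later than backup dedupe.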

Gelsinger said other products coming this year include VPlex with asynchronous replication for long distances and a Symmetrix VMAX refresh, which he said will be “expanding down and across emerging markets.”

Whatever products EMC delivers this year, you can expect the vendor to put a cloud twist on them. CEO Joe Tucci made the cloud the big theme of his presentation Tuesday.

“We did not invent the cloud, but we recognized the opportunity of the cloud early,” he said. “We believe this will be the next big game-changing trend in the IT industry. It’s a wave of disruption, and I don’t believe there will be an IT segment that will be immune to this wave.”

February 7, 2011  7:49 PM

Violin gets funding from Toshiba, Juniper; HP too?

Dave Raffo

Flash memory array startup Violin Memory today said it received $35 million in funding, naming NAND flash maker Toshiba America and networking vendor Juniper Networks as investors. There were other investors not named, including one that sounds a lot like Hewlett-Packard.

Violin’s press release said Toshiba and Juniper were investors, “along with other corporate partners, crossover investment funds, high net worth industry leaders and private equity general partners.” In an interview with StorageSoup, Violin Memory CEO Don Basile said the other investors include a “large storage and systems company that doesn’t want to be named.”

Basile wouldn’t confirm or deny if that partner was HP, but said the investor was a company that Violin has worked with. HP last June posted blazing fast online transaction processing (OLTP) benchmarks using four Violin Memory Arrays with HP ProLiant Blade Servers, marking the first time HP used another vendor’s memory system in that type of benchmark. There have been whispers as well as published reports in recent weeks that HP and Violin are working on deepening their relationship.

Basile said the new investors will lead to partnerships that will boost Violin’s distribution. He said two major trends are poised to drive Flash storage implementation. He credited Oracle CEO Larry Ellison with identifying the first trend last year when he said Oracle would use flash to speed up its databases. The second trend Basile sees is for flash to provide a foundation for building private clouds. That’s where Juniper comes in.

“Juniper exists in the highest end of data centers, and paired with a memory array like Violin’s, it can enable private clouds to be built,” Basile said. “We have [flash] in processors, and now we’ll have higher performance networking and I/O tiers to match those processors. This is something that will start to be talked about this year, and in 2012 people will start to roll out cloud services based on these technologies.”

Basile said the funding will allow Violin to double its headcount this year, mostly by expanding its sales and engineering teams. The company currently has just over 100 employees. He also promised new product rollouts in the coming months. Violin’s current products include a 3200 Flash Memory Array, a 3140 Capacity Flash Memory Array, and a vCache NFS caching device built on technology it acquired last year from Gear6.

Violin and Toshiba announced a strategic partnership last April, disclosing that Violin is using Toshiba’s NAND flash chips.


February 3, 2011  6:36 PM

New WhipTail CEO anticipates rise of solid state, demise of spinning disk

Dave Raffo

Solid state storage startup WhipTail Technologies is looking to move into its next phase of development after hiring a new CEO this week.

Dan Crain, whose 27 years in the IT business include five years as CTO of Brocade, takes over at WhipTail with the belief that the spinning disk era in enterprise storage is coming to an end. Crain said it’s a matter of time until the right type of solid state storage takes over, and he believes WhipTail’s I/O acceleration appliances are on the right track.

“Without a doubt, we’re getting to the end of spinning media and disk,” he said. “It’s been around for 40 years, and the physics are getting more challenging. Solid state as a medium for persistent storage is getting there.”

Crain said he knows just having solid state in a storage system isn’t enough. All of the major storage vendors do that now, and smaller vendors such as Nimbus Data, Texas Memory Systems, Avere Systems, Violin Memory, and Alacritech have all-solid state disk (SSD) systems.

WhipTail’s Data Center XLR8r and Virtual Desktop XLR8r 2U multi-level cell (MLC) appliances use NAND and DRAM, and focus on block storage, particularly applications with high I/O requirements. Crain said he sees all-SSD appliances as the way to go rather than hybrid systems that use solid state and spinning disk as separate tiers. He said the market will take care of one SSD hurdle – price – while superior technology can overcome reliability issues.

“A lot of tiered systems exist because the cost of solid state storage media was very, very, very expensive,” Crain said. “The price has fallen substantially now. With Flash-based disk, NAND-memory based SSDs, the more you write to them, the more they wear out. We have patented technologies that help deal with that. Hybrid systems that move things in and out of SSDs cause write wear. We don’t think that’s the way to go.”
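To see why write wear dominates this design debate, it helps to run a back-of-the-envelope endurance estimate. The numbers below are illustrative assumptions, not WhipTail specifications: an MLC rating of a few thousand program/erase cycles, a given usable capacity, a daily write load, and a write amplification factor that captures the extra internal writes caused by data movement (the churn Crain attributes to hybrid tiering).

    def flash_endurance_years(capacity_tb, pe_cycles, writes_tb_per_day, write_amp):
        """Rough lifetime estimate: total rated write budget over effective daily writes.

        capacity_tb        usable flash capacity in TB
        pe_cycles          rated program/erase cycles per cell (a few thousand for MLC)
        writes_tb_per_day  host writes per day in TB
        write_amp          write amplification (internal writes per host write)
        """
        total_write_budget_tb = capacity_tb * pe_cycles
        effective_daily_writes_tb = writes_tb_per_day * write_amp
        return total_write_budget_tb / effective_daily_writes_tb / 365

    # Illustrative only: 1.5 TB of MLC, 3,000 P/E cycles, 2 TB written per day.
    print(round(flash_endurance_years(1.5, 3000, 2.0, 2.0), 1))  # ~3.1 years
    print(round(flash_endurance_years(1.5, 3000, 2.0, 4.0), 1))  # ~1.5 years

Doubling the write amplification, for example through constant tier migration, halves the estimated lifetime, which is the argument Crain is making against hybrid designs.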

Crain said the Data Center appliance is tuned around database functions such as transaction logs and indexes, while the Virtual Desktop model is tuned for virtual desktop performance loads. “We’re not going to be the production data store for databases, we will be the index and transactional log storage device,” he said. “We keep system files, not user data.”

Since it started in 2008, WhipTail has been run by founders Ed Rebholz and James Candelaria. Crain replaces Rebholz as CEO, while Candelaria stays on as CTO. Crain said WhipTail is still in the early stage of product sales with “several dozen” customers, and he expects to roll out product enhancements soon.

“We have a lot of growth to do over the next two or three years and you’ll see us produce a lot of interesting technology,” he said. “Some of it will be very radical.”


February 2, 2011  6:34 PM

Mozy customers balk at cloud backup price increase

Dave Raffo

When EMC’s Mozy told customers this week it was raising prices on its home cloud backup service, company execs said the hike for most customers would be $1 per month. That would come to $5.99 for 50 GB of online backup.

Well, scores of angry MozyHome customers say 50 GB won’t come close to cutting it for them. The support forum on Mozy’s web site has more than 50 pages of posts from disgruntled customers who claim their costs would rise drastically under the new plan, calculating they would have to pay up to thousands of dollars a year to continue with Mozy. Many customers who have posted on the Mozy site said they have discontinued the service, with others threatening to do so.

Mozy reps said the price increase is necessary because customers are storing more photos and videos online, and backing up multiple computers.

Instead of charging $4.99 for unlimited backup, Mozy’s new entry level pricing is $5.99 for 50 GB on one machine. Mozy suggested most of its customers would find that plan satisfactory. Customers who need more online storage must pay $9.99/month for 125 GB on up to three devices, plus $2/month for each additional device and $2/month for each additional 20 GB of storage.
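Based on the tiers described above, a customer can estimate the new monthly bill with a short calculation. The sketch below assumes overage storage is billed in whole 20 GB increments and that extra-device charges apply only beyond the three devices included in the $9.99 plan; both are readings of the announcement rather than confirmed billing rules.

    import math

    def mozyhome_monthly_cost(storage_gb, devices=1):
        """Estimate the new MozyHome monthly price from the tiers described above."""
        if storage_gb <= 50 and devices == 1:
            return 5.99                      # entry plan: 50 GB on one machine
        cost = 9.99                          # 125 GB on up to three devices
        if storage_gb > 125:
            extra_gb = storage_gb - 125
            cost += 2.00 * math.ceil(extra_gb / 20)   # $2 per additional 20 GB
        if devices > 3:
            cost += 2.00 * (devices - 3)              # $2 per additional device
        return round(cost, 2)

    print(mozyhome_monthly_cost(50))        # 5.99
    print(mozyhome_monthly_cost(200, 2))    # 17.99

Under these assumptions, a user with a few hundred gigabytes across several machines lands far above the old $4.99 unlimited price, which is what the backlash below is about.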

The new price goes into effect immediately for new customers. Existing customers can keep the unlimited pricing until the end of their current contracts.

Mozy CMO Russ Stockdale said the company hasn’t changed pricing for its MozyPro (business) service, because its customers store mostly business files and not photos and videos.

“On the consumer side, pressure’s been there because of large files,” he said. “We feel like this is something the industry has to come to terms with.”

He said competitors and other cloud backup providers have size limits on files they will back up, or throttle bandwidth to keep prices down. “We’ve concluded those aren’t the right ways to do it,” he said. “We provide the same quality of service across all file types and storage allocations.”

Stockdale also said Mozy surveyed customers before making the price increase and got the impression they thought the hike was fair.

Many customers obviously do not think so. Here’s a sampling of comments from the Mozy forum:

“I almost laughed when I got the email about the new plans from Mozy. Anyone that knows enough about computers to back up their files remotely has way more than 50gb worth of stuff worth backing up.
I understand that, in any field, unlimited plans eventually go away. But usually companies do so by replacing them with a capped plan that’s still virtually unlimited for most users. We’ve seen this with cell phone carriers and ISPs capping their unlimited plans at heights that only effect the top 0.01% of users.

But here, going from unlimited storage to 50gb of storage in a time when you can’t even buy a 50gb hard drive anymore because it’s so small? That’s a ballsy move.

Going from unlimited to 50gb would have been a fair move 10 years ago. Now, not so much.”

“Someone else made a great point earlier in this thread that it is sad that my iPod can backup more than 50gb! If the folks at Mozy think 50gb of data is a reasonable “average user” threshold to set their baseline, they must still be living in 2003.”

“I thought Mozy was great, but I have over 600GB backed up on them, that’s over a $1,200 a year they want for that with the new pricing plans. I was debating between cloud storage and building my own server … that decision is a lot easier now. I can build a server at a relative’s house and FTP backup and still save money, or I could just run it from my house with a UPS in my detached garage for safety if I was that paranoid. … For $200, I could get 2tb backup with a dock, that’s a lot faster than Mozy and no constant resource drain.”

“This new pricing structure is ridiculous! I currently have 275 gigs backed up and this will jump my monthly fee from $4.5 to $30!! I can buy a small fireproof safe and a good external for that kind of money. I understand that things have changed since you started, but gradual changes or a better pricing structure might have worked better. Hitting loyal customers with a huge increase is just going to drive them away to other alternatives.

Only 50GB?! Seriously – only casual computer users only have 50GB of files and media, and those type of users aren’t savvy enough to even think to backup their files.”

“… With this new plan my fee would increase from $210 for 2 pc’s for 2 years to $1680. No way Mozy. My need for storage increase with 100-200 GB per year so if I renewed now the next renewal would run up to a staggering $2500! No way is that going to happen.

So my plan is to buy two external 2 TB hd’s and use them for offsite storage at my workplace. I will backup to one and the other will be offsite and once a month I will switch them. And just to clarify my online backup is NOT my only backup it is just a convenient offsite backup.

And for the people that are considering a competitor please remember that if you have more than 200 GB then your upload speed will be reduced to only 100 kbit/s. So for me they are not really an option. It took long enough to complete the initial backup here on mozy with a 2 mbit/s upload line.”


February 2, 2011  3:58 PM

CommVault CEO: Industry has shifted to snap-based backups

Dave Raffo

CommVault CEO Bob Hammer pointed to his company’s strong sales results last quarter as validation of Simpana 9, released in October, and of a change in the way organizations approach backup today.

CommVault Tuesday reported revenue of $84 million last quarter, up 18% from last year and 11.2% from the previous quarter. The results were a big jump from two quarters ago when CommVault slumped to $66.3 million in revenue when it missed its projections by a wide margin. CommVault also seems to have gained on market leader Symantec, which last week reported its backup and archiving revenue increased five percent over last year.

While Simpana 9 drew a lot of attention for adding source deduplication to the target dedupe in the previous version, it also leans heavily on replication and snapshot technology to create recovery copies of data without moving the data. These technologies were cited by Gartner in its most recent Magic Quadrant for Enterprise Disk-Based Backup/Recovery that placed CommVault at the head of the leaders group.

“The old backup methodology of managing the backup stream is no longer efficient,” Hammer said. “The backup copy today is your persistent long-term archive copy. The industry has shifted into managing data using snaps and replication as the primary way of managing and moving data. We can seamlessly create a long-term non-corrupt point-in-time copy. The backup business is now a more comprehensive data management business.”

Hammer also said CommVault will not follow Symantec’s lead of selling its backup software on branded appliances. CommVault sells Simpana on hardware from partners such as Dell, and Hammer said he will continue with that strategy.

“We don’t want to get into the hardware business, for a lot of reasons,” he said. “At least not on our own. We will provide our software to partners who embed it and sell it on hardware.”

CommVault’s sales spike last quarter came despite a drop in revenue from its Dell OEM deal. Dell accounts for 20% of CommVault revenue, but that revenue decreased 13% year over year and 10% sequentially last quarter. Hammer said that was due in part to a drop in sales to the federal government, which make up a large part of its sales through Dell.

As for cloud backup, Hammer said CommVault is seeing large deals through managed service providers (MSPs) while enterprises are taking a slow approach. “But whether enterprises are going to deploy to the cloud yet or not, cloud capability is one of the requirements they’re looking for,” he said.


February 1, 2011  1:16 PM

Data center transformation and optimization aren’t the same

Randy Kerns

Two topics are top of mind with IT managers and vendors today: data center transformation and data center optimization. These topics come up all the time in my discussions with CIOs and IT managers regarding their initiatives, and with vendors about their strategies for delivering solutions to meet those demands.

Unfortunately, some people confuse the meaning of the two initiatives. They are similar, but different. There is a misunderstanding about how they should be applied, and that leads to discussions that start down the wrong track and can take time to rewind and re-focus.

Let me put some perspective on how the terms should be used, and how they will evolve over time. Data center transformation is a big-picture vision about changing the premise of how a data center operates. Think of it as a clean-sheet-of-paper discussion about IT. The discussion is really about how the services of compute and storage should be delivered to customers of the data center. The customers could be departments, individual users, or even other companies.

Discussions about cloud computing have transitioned into data center transformation discussions for most IT professionals. These cloud discussions include public clouds, where compute and information storage are handled by a service provider; private clouds, where the data center delivers its services as cloud-like offerings; and hybrid clouds, where some services and information storage remain on premises within the data center and some utilize a public cloud.

For IT professionals, data center transformation requires putting together a new strategy for providing services for today and the future – usually focused on providing IT as a service (ITaaS). For more on data center transformation, see Evaluator Group articles here and here.

Data center optimization is focused on making the data center more efficient. The optimization may refer to deploying and exploiting new technologies and methods. Data center optimization is approached as an overall goal but is broken down into specific areas. Storage efficiency is one of those areas for optimization that has many elements.

Data center optimization should include a rigorous and believable measurement system for making decisions and demonstrating results. Individual areas generally use an ROI methodology for the measurements. Overall, a broad TCO calculation is used to show the effect of multiple initiatives.
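As a rough illustration of the difference between the two measurements (and not Evaluator Group's methodology), a project-level ROI compares the net gain from a single optimization project against its investment, while a broad TCO rolls up acquisition and operating costs across initiatives over a planning horizon:

    def project_roi(annual_benefit, annual_cost, investment, years=3):
        """Simple project ROI: net gain over the period divided by the investment."""
        net_gain = (annual_benefit - annual_cost) * years - investment
        return net_gain / investment

    def tco(acquisition_costs, annual_operating_costs, years=3):
        """Broad TCO: up-front spend plus operating spend across all initiatives."""
        return sum(acquisition_costs) + years * sum(annual_operating_costs)

    # Illustrative numbers only: a storage efficiency project costing $100K up front,
    # saving $80K a year against $10K a year in added operating cost.
    print(round(project_roi(80_000, 10_000, 100_000), 2))   # 1.1, i.e. 110% over 3 years
    print(tco([100_000, 250_000], [10_000, 40_000]))        # 500000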

For data center transformation and data center optimization discussions, there are differences in time scale, the decision-making process, the amount of effort involved, and the costs. Transformation is a company-wide directional decision with long-term implications. Consequently, the decision time may be lengthy. Optimization, primarily broken down into individual projects as elements of a larger goal, will have shorter decision cycles and be put into effect (at least partially) much sooner. These are important differences to take into consideration.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 27, 2011  1:31 AM

Quantum upgrades dedupe software, while sales miss mark

Dave Raffo

Quantum today said it is doubling the speed of its data deduplication software, at the same time it admitted sales of large disk backup devices with dedupe last quarter were disappointing.

Quantum’s DXi 2.0 software will have the same dedupe ratio as the original DXi version, but Quantum claims it has the fastest throughput of any dedupe software on the market. The vendor also eliminated the option to dedupe post-process and failed to add global deduplication capabilities.

The vendor made its 2.0 release at the same time as it reported earnings, falling well short of its revenue guidance mainly because of disappointing enterprise deduplication system sales.

Quantum said a DXi4500 SMB appliance running DXi 2.0 software will deduplicate at up to 1.4 TB/hour for NAS and 1.7 TB/hour for the Symantec OpenStorage (OST) protocol, and a midrange DXi6500 with version 2.0 will dedupe at 4.3 TB/hour for NAS and 4.6 TB/hour for OST.

The DXi software upgrade follows enhancements by EMC Data Domain and Sepaton to their deduplication products over the last nine days.

The first version of DXi let customers choose between inline and post-process deduplication, but DXi 2.0 only supports inline dedupe. Quantum SVP Janae Stow Lee said with the speedier software plus more powerful processors, inline dedupe is fast enough to negate any advantage of post-process. DXi 2.0 still does not support global dedupe across systems. Software deduplication products support global dedupe, as do Quantum's hardware rivals Sepaton, Data Domain and IBM, at least across two nodes.

“This is not a clustered system,” Lee said. “There are advantages, but also complexity and cost disadvantages of clustered systems. We have a different strategy for that over time.”

Quantum will begin selling the 2.0 software on its SMB DXi4500 and midrange DXi6500 systems by March, and it will be available on the higher-end DXi6700 and DXi8500 hardware over the summer. DXi 2.0 will have the same price as the 1.0 software and will be a free upgrade for existing customers.

Quantum today reported revenue of $176 million for last quarter, which fell below its guidance of $185 million to $200 million. Quantum executives said the vendor did have record sales of its midrange DXi systems and StorNext file system software and gained share in tape, but had trouble closing large deals with its new DXi8500.

“We continue to make progress on our growth strategy, but not as much as some people expected,” Quantum CEO Rick Belluzzo said of the revenue miss. “The biggest factor was the enterprise business was substantially down because of deal flow and a transition to the 8500. We had a lot more [DXi] deals, but a lot of small ones.”


January 25, 2011  5:13 PM

Cloud storage gateway startup Cirtas gets funding, new CEO

Dave Raffo

Three multi-billion dollar storage acquisitions over the past two years have made storage a hot target for venture capitalists, especially for startups that move data to the cloud.

Cirtas today unveiled a new CEO as well as $22.5 million in a second round of funding. Cirtas is among several cloud storage gateway vendors who launched over the past year or so, and they have been busy with funding. Nasuni ($15 million) and Panzura ($12 million) closed funding rounds last month and StorSimple ($13 million) received funding in September. But with all of the large storage vendors also looking to the cloud, it’s unlikely that the market can support so many startups.

Cirtas CEO Gary Messiana said it will take more than the ability to move data off to the cloud for one or more of these startups to stand out. He said it is the intelligence in the Cirtas Bluejet Storage Controller that is unique. Bluejet presents an iSCSI target to servers as if it were an array on a local SAN. It handles encryption, tiering, data reduction and snapshots as well as sending data off to the cloud. Bluejet is used for backup data, along with tier 2 and tier 3 primary data. Customers can keep data on the appliance or move it off to cloud service providers such as Amazon, which was a first-round investor in Cirtas.

“We have cache on our box, we have disk storage on our box, and our algorithms and sophisticated intelligence determines whether each file should reside in cache, the disk array, or we should move it off to the cloud,” Messiana said. “All of our customers are using us to put data into the cloud. Not all of [the data] obviously, but the portion that makes sense.”
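Messiana doesn't detail the placement algorithms, but the general shape of such a policy is easy to sketch: score each piece of data on how recently and how often it is accessed, keep hot data in cache, warm data on local disk, and push cold data to the cloud tier. The thresholds and scoring below are hypothetical, not Cirtas logic.

    import time

    def choose_tier(last_access_ts, access_count, now=None,
                    hot_window=3600, warm_window=7 * 86400, hot_hits=10):
        """Pick 'cache', 'local_disk', or 'cloud' from simple recency/frequency heuristics.

        last_access_ts  Unix timestamp of the most recent access
        access_count    accesses observed over the tracking window
        """
        now = now or time.time()
        idle = now - last_access_ts
        if idle < hot_window and access_count >= hot_hits:
            return "cache"        # hot: recently and frequently accessed
        if idle < warm_window:
            return "local_disk"   # warm: touched within the past week
        return "cloud"            # cold: ship it to the cloud provider

    print(choose_tier(time.time() - 60, 25))          # cache
    print(choose_tier(time.time() - 2 * 86400, 3))    # local_disk
    print(choose_tier(time.time() - 30 * 86400, 1))   # cloud

A production controller would also weigh data reduction, encryption overhead and retrieval cost, but the cache/disk/cloud decision Messiana describes is, at heart, this kind of scoring problem.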

Messiana won’t say how many customers Cirtas has yet, but he said the new funding will be used to beef up engineering, sales, and marketing for the Bluejet product. He replaces founder Dan Decasper, who remains with Cirtas as CTO. Messiana came from Cirtas investor Bessemer Venture Partners, where he was an entrepreneur in residence. He has also been a CEO at Netli and Diligent Software Systems.

He said he learned a lesson at content delivery company Netli that should help him forge a strategy at Cirtas. “I saw that no large content owner wants to go to a single CDN [content delivery network],” he said. “We believe it will be the same thing for the cloud. We believe large enterprises will want to use multiple providers. Customers don’t want to get locked into a single back end.”

For that reason, he said, Cirtas will work closely with other cloud providers besides Amazon S3. Its other cloud service partners include Iron Mountain, EMC Atmos and AT&T Synaptic Storage as a Service.


January 24, 2011  4:11 PM

Hosts vs. servers in open systems storage

Randy Kerns

While working on a product analysis for the recently launched EMC VNX multiprotocol storage systems, I picked up on something I’ve also seen in other recent vendor presentations.

I’m seeing the word host commonly used to refer to the computing element that storage connects to, either directly or through a SAN. The word host takes me back to the mainframe terminology that has been around for so long.

The mainframe world has an entire lexicon of terms that are different than those used in the open systems world. A while back I made a list that translated the terms between mainframe and open systems for a storage systems class I was teaching. That list became popular and many people asked for a copy (if you would like a copy, send an email to info@evaluatorgroup.com).

In open systems, people used the term servers instead of hosts. Now I’m seeing vendors use the term host in documentation and presentations for open systems. There are two reasons for this:

First, vendors generally define a host as a server running an application. This seems to be the most common sense definition. Second, the word server is really getting a broader meaning today. There are a number of reasons for this, specifically in the storage systems world.

With many storage solutions today, the storage function is really an application running on a standard server. It is a special application, usually with specific hardware requirements. It is often referred to as a server with the storage application. These systems may be characterized as appliances or even a software package that can be deployed on a server. This usage clouds the meaning of the term server, and requires a further explanation to determine which server is being referred to.

The standard client/server model that once defined open systems may no longer easily map to the definitions used where a server is running an application and has storage attached to the server. So when referring to a server running an application and storage that is critical to the operation, the term host can make it clearer. And, I suspect, that’s the main reason we’re seeing the term used that way so often now.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 21, 2011  8:19 PM

Data miner strikes gold with DRAM storage appliance

Dave Raffo

We’ve seen that these are still early days for solid-state and Flash adoption in enterprise storage, and people are still trying to figure out the best way to implement the technology. That leaves the door still open to new approaches.

One of those new approaches is that of Kaminario, which in mid-2010 came out of stealth with a DRAM-based solid state storage appliance that it claims can provide faster access to data in key applications. Kaminario also uses hard drives in its K2 appliances, which consist of blade servers with redundant hard drives, Fibre Channel switches and redundant UPS. Its KOS operating system controls load balancing of data across the DRAM of each Data Node, making the DRAM of the entire system look like a single high-speed disk to the application.
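Kaminario doesn't publish KOS internals, but the basic trick of presenting many nodes' DRAM as one logical device is usually some form of striping: each logical block address maps deterministically to a node, so reads and writes spread evenly across the cluster. The mapping below is a simplified illustration, not KOS itself.

    class DramStripeMap:
        """Toy striping layer: spread logical blocks round-robin across data nodes."""
        def __init__(self, node_count, stripe_blocks=64):
            self.node_count = node_count
            self.stripe_blocks = stripe_blocks   # consecutive blocks per node per stripe

        def locate(self, lba):
            """Return (node, local block index) for a logical block address."""
            stripe = lba // self.stripe_blocks
            node = stripe % self.node_count
            local = (stripe // self.node_count) * self.stripe_blocks + lba % self.stripe_blocks
            return node, local

    m = DramStripeMap(node_count=4)
    print(m.locate(0))     # (0, 0)
    print(m.locate(64))    # (1, 0)
    print(m.locate(300))   # (0, 108)

Because consecutive stripes land on different nodes, a large sequential workload like Digital Trowel's crawl (described below) is served by every node's DRAM in parallel rather than bottlenecking on one box.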

Now that K2 appliances have been on the market awhile, Kaminario has been identifying customers using them for a performance boost. One of them is Digital Trowel, an Israel-based Web data mining company that sifts through Internet records to find relevant data quickly for customers. Digital Trowel CTO Anton Bar said it took more than a week to crawl five billion database records for his customers, find errors and correct them with EMC Clariion SAN arrays. Since adding a K2 appliance, he said it now takes three days to mine those records.

“The bottom line is, our identity resolution process was shortened by about 50 percent, and that’s very important in our line of business,” Bar said.

Bar said he considered several solid state approaches, including adding SSD drives to his EMC Clariion array, going to an all-flash SSD appliance and using Flash on PCIe cards. He tried SSDs in his array first, but said that didn’t give him the performance increase he needed. K2 appliances start at $50,000, and Bar said that was a bargain compared to other methods.

“In addition to simply shoving flash discs into our Clarion, which didn’t improve the throughput at all and was terribly expensive, we considered also the Texas Memory Systems RamSan products,” he said. “However, they had the same price, half of the storage space and lower speed [than K2] — a clear no-brainer. We also considered Fusion-io ioDrive Flash cards – but they weren’t fail proof. There was no redundancy at all.”

The K2 appliance may not win out over other approaches in all cases, but it shows that solid state storage options are still expanding.

