Storage Soup

June 14, 2011  7:18 PM

Quantum pockets Pancetera for virtual server backup

Dave Raffo

Quantum acquired startup Pancetera Software today, giving it virtual server backup for its DXi data deduplication family right away. In the long run, Pancetera can also add intelligent storage management for virtual environments to Quantum’s StorNext file system.

Quantum paid $12 million for Pancetera, which came out of stealth last August with its Pancetera Unite virtual appliance designed to optimize virtual machine backup. It added SmartMotion software in April, enabling Unite to push data from virtual machines directly to any NAS target without requiring staging servers with dedicated backup software.

By owning the technology instead of forging a partnership, Quantum is looking to develop new products for protecting and managing storage connected to servers running VMware.

“This gives us immediate value for DXi and virtual environments, and it will allow us to have unique roadmap items with DXi,” Quantum CEO Jon Gacek said. “The reason we went with an acquisition instead of an OEM deal is we can combine this with StorNext to develop solutions to manage storage – not just backup, but storage – in virtual environments.”

Gacek said Quantum will immediately offer Pancetera software with DXi systems sold for virtual environments. “We’re in deals now where we know that software will make the difference,” he said.

He said Quantum will also sell Pancetera to existing DXi customers, but he’s not sure if it will make Pancetera software available as a widescale standalone product. “We haven’t decided if we want to enable its value to our competitors’ customers,” he said. He said he expects Pancetera technology with StorNext to hit the market in an appliance for virtual data in 2012.

“Virtualization creates a lot of unstructured data,” Gacek said. “StorNext with Pancetera gives us the ability to get inside of a VMDK, and you can imagine some of the things we might do.”

Quantum tried attacking the VMware backup problem with OEM partner PHD, licensing its esXpress product in 2009. But that relationship fell apart last year. Gacek said the OEM deal didn’t give Quantum enough control over the technology.

Quantum is hiring most Pancetera employees, including founders Mitch Haile (CTO) and Greg Wade (VP of engineering) and CEO Henrik Rosendahl.

“The issue for them was, they were going to be hard-pressed to convince companies to buy that kind of software from a startup,” Gacek said. “But the software is complete. We like how it can be supported and it fits well with what we’re doing with DXi.”

VMware is the only hypervisor Pancetera supports. A Quantum spokesperson said no decision has been made on whether to expand support to other hypervisor platforms.

Gacek, who replaced Rick Belluzzo as CEO in April, said the acquisition and the hire of Ted Stinson as senior VP of worldwide sales Monday shows “we’re in growth mode and we’ll be aggressive about making change.”

June 10, 2011  1:30 PM

Best of need vs. best of breed

Randy Kerns

I was recently teaching a class on storage technology and systems to a group of IT professionals. I’m always interested in finding out what they know about storage products and what they hear about the market.

I discussed with the class a story I recently read about an IT director commenting that he was interested in “best of need” products rather than “best of breed.” His argument was that he wanted a product that fit his requirements and only those requirements. A best of breed product probably had more capabilities than he needed, probably with extra costs. The comment was “why pay more for something I don’t need.”

The other IT people in the class echoed the sentiment and added one more important point. Storage systems have a limited lifespan of four or five years, and in that limited time they may not get to the point of deriving value from those best of breed capabilities. The sentiment was to buy only what you need.

The implications here are significant. Vendors marketing best of breed solutions may be missing the mark with some customers. There is the implicit assumption by customers that a product represented as best of breed will cost more. The other implication is that customers will buy a product with capabilities they may not need because of potential future requirements. But this may not be the case either because of the limited lifespan of storage in the data center.

Understanding customers’ needs and marketing to meet those needs may be a better approach by vendors. They should also highlight how and why a particular product can excel in that environment. Other important considerations for the customer should be addressed as well – such as reliability and support.

The IT professionals I work with continue to impress me. They sort through the messages and focus on their business and what it takes to meet their requirements.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

June 9, 2011  3:28 AM

EMC ready to roll out Symmetrix VMAXe against 3PAR, XIV

Dave Raffo

EMC is preparing to launch a baby Symmetrix VMAX system called the VMAXe, which lacks mainframe connectivity and fills the gap between the vendor’s midrange VNX unified storage platform and the enterprise VMAX. EMC is positioning the new system squarely against Hewlett-Packard’s 3PAR and IBM’s XIV storage systems, other enterprise SAN arrays that are not built to connect to mainframes.

EMC is planning to make the system generally available this month and officially launch it in July. While its customers and partners are still under non-disclosure agreements, we’ve seen EMC documents that lay out the underlying technology, hardware specifications and the vendor’s positioning of the product.

EMC still recommends the VMAX for customers that need more capacity, data at rest encryption, hardware compression, SRDF remote replication or the ability to attach to a mainframe.

“VMAXe gives us a specific competitive advantage against some of the industry’s newer arrays, especially if you have any IBM XIV or HP 3PAR in your accounts,” read an EMC document for its sales team.

The EMC documents say the VMAXe can also compete with higher-end NetApp FAS arrays and entry level enterprise systems from IBM and Hitachi Data Systems.

The VMAXe uses a special build of the Enginuity operating system that powers the VMAX, and is 100% virtually provisioned – EMC’s version of thin provisioning. It supports FAST VP automated tiering and ships factory configured with a base software bundle that includes TimeFinder for VMAXe for cloning and a RecoverPoint splitter instead of SRDF for remote replication. Open Replicator and Open Migrator software are also available for moving data from competitive arrays onto the VMAXe.

EMC claims a VMAXe can install in less than four hours, and that 1 TB of storage can be provisioned in less than three minutes.

The VMAXe hardware supports up to four engines and 960 drives. An integrated system bay holds one engine and 150 drives, and a fully populated system has two additional drive bays with 180 drives apiece. The VMAX supports eight engines and 2,400 drives. VMAXe uses a quad-core engine while VMAX uses a six-core engine.

Among other differences, VMAXe has 96 GB of memory cache per engine compared to VMAX’s maximum of 128 GB, VMAXe has 64 Fibre Channel and 32 Ethernet ports while VMAX supports twice as many of each, and VMAXe scales to 1.3 PB usable capacity compared to VMAX’s 2 PB.

The VMAXe also comes with pre-selected drive tiering configurations. A single-tier system is all 450 GB 15,000 rpm Fibre Channel drives; a two-tier system comes with 97% 2 TB SATA drives and the rest 4 Gbps FC 200 GB Flash drives; and a three-tier system has 65% SATA, 32% Fibre Channel and 3% Flash. The system’s host connectivity options include 8 Gbps Fibre Channel, Gigabit Ethernet (GbE) and 10 GbE iSCSI, and Fibre Channel over Ethernet (FCoE).
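Those percentages translate directly into per-tier capacities. As a back-of-the-envelope illustration (a hypothetical calculation assuming capacity simply scales with the stated drive mix, not EMC’s actual sizing methodology), the three-tier split works out like this:

```python
# Split a hypothetical 100 TB usable pool across the three-tier
# VMAXe mix described above (65% SATA, 32% Fibre Channel, 3% Flash).
# Illustrative arithmetic only -- not EMC's sizing tool.

def tier_split(usable_tb, mix):
    """Return the capacity allocated to each tier, in TB.

    `mix` maps tier name -> percentage of the usable pool.
    """
    return {tier: usable_tb * pct / 100.0 for tier, pct in mix.items()}

three_tier_mix = {"SATA": 65, "FC": 32, "Flash": 3}

if __name__ == "__main__":
    for tier, tb in tier_split(100, three_tier_mix).items():
        print(f"{tier}: {tb:.1f} TB")
```

On a 100 TB pool that is 65 TB of SATA, 32 TB of Fibre Channel and only 3 TB of flash, which is the point of FAST VP tiering: a thin flash layer absorbs the hot data while cheap SATA holds the bulk.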

EMC estimates the VMAXe will cost about 15% to 20% less than smaller VMAX configurations and 5% to 10% less than a three-engine VMAX. The VMAXe cannot be upgraded to a VMAX.

June 2, 2011  6:29 PM

Is it time for Dell to scoop up Brocade?

Dave Raffo

Whenever there is talk of storage acquisitions – which is just about all the time these days – the name of Brocade comes up. The switch vendor was believed to be high on Hewlett-Packard’s shopping list until HP bought Ethernet switch vendor 3Com instead in late 2009. Since then, talk surfaces every so often that Dell might pick up Brocade for its Fibre Channel and Ethernet networking gear.

Canaccord Genuity financial analyst Paul Mansky raised the issue again today when he put out a report suggesting Dell may be ready to drop as much as $5.5 billion on Brocade. Mansky wrote that the acquisitions of EqualLogic, Compellent and Perot Systems gave Dell storage, servers and services but left it without the networking piece of the IT stack. Outside of some low-end Ethernet switches it develops, Dell gets most of its networking devices through OEM deals with Brocade and Juniper Networks.

Dell is looking to move from a PC-centric company to an enterprise player. “ … given [that] networking will most likely be among the most critical sources of intelligence helping to re-shape the horizontal/physical layers into a virtual/vertical stack, not owning this [networking] technology puts Dell at risk of simply hopping from one commoditized business into another,” Mansky wrote.

Mansky maintains Brocade is the right target for Dell because Juniper sells mostly to service providers, which is not Dell’s business, while other alternatives such as Extreme Networks and Force10 don’t have enough market share to be worthwhile. Brocade is also the only vendor among those to have a foothold in storage. “Brocade owns Fibre Channel (70% share), exceptionally tight support for which is a must (our view) in a converged world,” Mansky wrote. “Net, Fibre Channel is high ROI, legacy Ethernet is low investment and converged products (recently introduced) are the growth engine.”

Mansky believes Dell should act now as the PC market is expected to decline at a faster rate and Dell has $7 billion in cash. He said a price of $10 per share is possible for Brocade, bringing the deal to $5.5 billion.

There is a risk for any storage vendor that buys Brocade. Such a deal could prompt competitors to push sales of Cisco FC switches, taking away much of Brocade’s revenue. A year ago, that risk was less for Dell because its partner EMC could be counted on to continue to support Brocade as well as Cisco. But the EMC-Dell storage partnership has fallen apart and EMC is a close ally of Cisco. However, as Mansky pointed out, Cisco has angered IBM and Hewlett-Packard (as well as Dell) by getting into the server business. That makes those vendors more likely to stick with Brocade for its storage products, which make up most of its revenue.

May 31, 2011  7:36 PM

NetApp, CommVault forge OEM deal around SnapProtect, tape

Dave Raffo

NetApp and CommVault today said they have signed an OEM deal that lets NetApp sell and brand CommVault’s SnapProtect to move replicated data to tape.

SnapProtect is part of CommVault’s Simpana 9. It handles Simpana’s reporting, scheduling, indexing and cataloging of array-based snapshots. SnapProtect uses a unified catalog to track backups across disk and tape while NetApp’s SnapVault only handles disk-to-disk replication.

Along with extending SnapMirror’s capabilities to tape, NetApp’s SnapProtect can manage its SnapMirror replication policies. NetApp is pricing SnapProtect based on storage capacity and the number of controllers used. NetApp is also licensing and trademarking the SnapProtect name, which will no longer be used by CommVault in future releases of Simpana.

“When we add Simpana 9 management capabilities to SnapVault and SnapMirror, it lets us do workflow management for disk-to-disk-to-tape backup,” NetApp director of data protection solutions Mark Welke said. “It also gives us the full catalog capability that goes with it. We did not have a seamless solution for tape.”

Welke said SnapProtect will let NetApp customers create local snapshots and restore data from primary storage. “We’ll move less data [through CommVault’s deduplication] and move it faster,” he said.

CommVault and Syncsort are the only backup software vendors that can manage NetApp snapshots, Welke added.

Welke and CommVault SVP of business development Dave West said the deal is not exclusive. That means CommVault can strike similar deals with other area vendors (it has OEM relationships with Dell and Hitachi Data Systems) and NetApp can add other software partners. But it’s interesting that NetApp chose CommVault as its partner here instead of backup software market leader Symantec.

“The state of data protection is broken,” West said. “We’re trying to solve problems associated with legacy solutions. Unless you integrate with array-based technologies, you won’t be able to solve real data protection problems.”

The deal is seen as a potentially significant sales boost for NetApp, which often wins high marks for its technology but holds only a small share of the backup market. Nearly one-quarter of CommVault’s revenue comes from its OEM deal with Dell, but it is trying to expand its partnerships with array vendors. CommVault and Hewlett-Packard have been working closely in Europe, and CommVault also has OEM deals with Fujitsu and Bull.

May 31, 2011  1:28 PM

Don’t turn data archiving into data hoarding

Randy Kerns

Archiving has the potential to change individual behavior, depending on how it is implemented and introduced to data owners. An archiving system can prompt data owners to alter the way they handle their data, and this could unintentionally circumvent the reasons for the archiving system.

To illustrate this, let’s take a real example from a company that will not be identified here, and a specific user in that company. The company instituted a new policy that data stored on primary storage systems would be archived if it had not been accessed in six months. After data was moved, users had to submit a request form to access it. There was no time guarantee for restoration, and the first retrieval for others had taken two weeks, including the time to process the request and retrieve the data. (The “processing” of the request was suspected to be the time-consuming part.)
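A policy like this one, archive anything untouched for six months, boils down to a last-access scan. A minimal sketch of that scan (hypothetical, and assuming POSIX atime is available and trustworthy, which many filesystems mounted with relatime or noatime undermine, so real archiving products track access themselves):

```python
import os
import time

SIX_MONTHS = 180 * 24 * 3600  # roughly six months, in seconds

def archive_candidates(root, max_idle=SIX_MONTHS, now=None):
    """Yield paths under `root` whose last access time is older than
    `max_idle` seconds -- the kind of scan an age-based archiving
    policy performs. Relies on POSIX atime, which relatime/noatime
    mounts make unreliable; shown here for illustration only."""
    now = time.time() if now is None else now
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if now - os.stat(path).st_atime > max_idle:
                    yield path
            except OSError:
                continue  # file vanished mid-scan; skip it
```

Note what the scan cannot see: data a user has copied to an unscanned share or to removable media has already escaped the policy, which is exactly the hoarding behavior described above.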

This is where behavior modification came in. Realizing it was not easy to get data back, the user turned into a data hoarder. Some data believed to be vital to refer to in the future was stashed elsewhere – on storage areas that were not scanned for archiving. Other data was copied to removable media and put in a desk drawer. This was done to avoid the delay in getting data back when needed, and the potential inability to get the data at all due to technical or administrative breakdowns.

Obviously, data hoarding defeated some of the reasons for archiving. The data did get moved off primary storage but was placed on other tiers besides the archiving tier. Other data was copied and put into a desk drawer, which could become a security risk.

The real issue is the way archiving systems work and how the process is presented to the data owner. The archiving systems need to allow the owner to see his data and access it when necessary in a reasonable amount of time. There has to be some confidence that data just won’t go away.

Creating a successful archiving environment requires systems that meet the need and an approach that works for the company and its users. If not, behavioral change is likely. Data hoarding is a basic instinct.


(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm). 

May 26, 2011  12:52 PM

NetApp CEO: server vendors can’t keep up with storage innovation

Dave Raffo

NetApp CEO Tom Georgens says the battle for storage supremacy is increasingly becoming a two-horse race between his company and EMC.

During NetApp’s earnings conference call Wednesday, Georgens painted large acquisitions such as Hewlett-Packard’s buying 3PAR and Dell grabbing Compellent last year as attempts by those vendors to catch up on innovation. Despite those deals, NetApp and EMC continue to take market share.

NetApp’s product revenue for last quarter increased 27% year over year – at least twice the storage revenue growth of server vendors HP, Dell and IBM. EMC, which has more overall revenue than NetApp, grew its storage product revenue around 17.5% over last year in its most recent quarter.

“Nobody buys storage from a server vendor unless they also buy their servers,” Georgens said. “And they’ve all lost a lot of ground, so they’re effectively re-loading by acquiring companies out there that actually have innovated.

“Companies like NetApp and EMC – as much as I hate to say it – are actually going to be the innovators because I think we’re going to be the only guys who are able to sustain the level of investment to stay competitive.”

He did say some customers still prefer best of breed, and that’s why NetApp is working on integrated stacks such as its FlexPod architecture with Cisco and VMware, and partnering with others such as Microsoft, Citrix, SAP and Accenture.

Georgens said EMC’s recent VNX unified storage system hasn’t changed the competitive dynamic between EMC and NetApp, but “it actually creates separation of us and EMC from the rest of the pack … it’s going to be more and more difficult for the rest of the storage industry to keep pace.”

He said EMC’s recent Project Lightning announcement involving server-side flash in a PCIe card underscores the importance of flash in storage, a technology NetApp has embraced with its storage-side Flash Cache card.

“I expect there will be flash in large quantities deployed in servers,” he said. “And I think there will be various ways of utilizing it, some simply as cache, some as permanent storage. I expect flash to be a big deal, and I think the Flash Cache has been very effective for us. And we’ve got aspirations to do more with it over the next couple of years.”

One server vendor NetApp is watching closely is IBM, because it is both a partner and competitor. IBM sells NetApp’s flagship FAS storage systems as well as the Engenio storage line that NetApp acquired from LSI this month for $480 million. But IBM also internally develops storage products that compete with NetApp’s.

“IBM has aspirations to have products in this space, and they’ve had that all along,” Georgens said. “The desire by their internal groups to make their own products makes the positioning very complicated. And are we happy with the positioning? No. On the other hand, our engagement with IBM’s customer facing groups, the people who actually have to put solutions in front of customers, is actually exceptionally strong. If we continue to out-innovate them and introduce products to market faster, then we’ll preserve the business.”

As far as more acquisitions, Georgens said NetApp is more likely to make smaller ones such as it did with Akorri and Bycast over the past year than large deals such as Engenio, but “when it’s the right transaction at the right price and appropriately strategic, then we’ll move ahead.”

Georgens said NetApp would refrain from selling Engenio products through its own channels in deals that compete with systems sold by OEM partners such as IBM and Dell. He said the relationship between NetApp and Engenio OEM partner Teradata could help NetApp move into the data warehousing market, because the two vendors do not compete and have complementary technologies. “I’m optimistic about that one,” he said of Teradata.

May 20, 2011  12:58 PM

Symantec spends $390M for Clearwell, discovery

Dave Raffo

Symantec took a big plunge into the e-discovery market Thursday night when it acquired Clearwell Systems for $390 million, giving it technology that will complement its Enterprise Vault email archiving application as well as other products in its backup, security and data management portfolios.

Clearwell’s e-discovery suite handles the collection, identification, preservation, processing, review, analysis and production of data. On a conference call to discuss the deal, Symantec CEO Enrique Salem said Clearwell’s suite of products goes far beyond the Discovery Accelerator module for Enterprise Vault.

“Our Discovery Accelerator was a bolt-on technology to Enterprise Vault,” Salem said. “We clearly saw that we had a part of the discovery process, but not the breadth that Clearwell is doing end-to-end from the initial stage of gathering the information through the final production. We needed to add the significant steps of analysis and review. That process is where we found the Clearwell technology excels. This extends our ability to preserve it, analyze it, and review it.”

Symantec acquired the Enterprise Vault product when it bought Veritas, which had acquired Enterprise Vault vendor KVS in 2004. Salem said that since then, customer focus for email archiving and management “has shifted from quota management to help me with the discovery process.”

Clearwell’s eDiscovery product has four modules: Legal Hold, Identification and Collection, Processing and Analysis, and Review and Production. It provides features such as transparent keyword and concept search, legal hold management workflow, the ability to collect information from multiple data sources, advanced processing capabilities such as deduplication, filtering and content analysis, linear and non-linear review, and a reporting and auditing trail for case management.

The Clearwell product portfolio and its team will become part of Symantec’s Information Management Group (IMG). Symantec intends to continue selling Clearwell’s products on a standalone basis as well as in connection with Enterprise Vault. Clearwell and Symantec already have a technology partnership that lets customers use the two vendors’ products in an integrated fashion. Brian Dye, Symantec’s VP of product management for the IMG, told StorageSoup that Clearwell technology would also be used with Symantec’s NetBackup, Data Loss Prevention (DLP) and Data Insight products.

“This is about a broader strategy to secure and manage information,” Dye said. “It’s not just about combining Clearwell with Enterprise Vault.”

Enterprise Strategy Group analyst Katey Wood said eDiscovery software has become more important over the past few years following changes to the Federal Rules of Civil Procedure (FRCP) that require IT to become more involved in requests for legal information. The rapid growth of electronic data makes good archiving and discovery capabilities even more important.

“It’s become more and more expensive to outsource that to service providers and have attorneys review all things that are relevant,” she said. “There’s a big movement to bring software in-house so companies can do the process themselves. They can cut down and filter information before sending it off to lawyers.”

Gartner placed Clearwell in the leaders category of its eDiscovery Magic Quadrant released last week, with Symantec going in the challengers section.

Clearwell claims more than 400 customers and around 200 employees, and said it had around $56 million in revenue over the 12 months ending March 31. The deal is expected to close by September.

Clearwell also makes its eDiscovery technology available through service providers, and Symantec executives said they would probably continue down that path.

In 2009, EMC acquired Clearwell rival Kazeon for a price that was reportedly below $100 million.

May 18, 2011  4:13 PM

Storage purchasing decisions should focus on functionality, economics

Randy Kerns

This is the season for vendors of storage systems, servers, software and networking equipment to hold events to show off their products and discuss their strategies. EMC World and Symantec Vision recently wrapped up, with more shows to follow.

These events really do display the results of the hard work of talented people. Some products are organically developed while others result from the acquisitions of other companies and their talented people.

The messages are typically about new product capabilities, which may be iterative refinements to the current products or new products entirely. It is easy to get enamored with these – the result of both effective marketing and our insatiable desire to keep up with the next new thing.

But for IT organizations, the focus needs to be dialed back a bit to look at the requirements – both now and in the next three to five years. Some of the new things fit into this category while others will not. The importance is moderated by meeting specific requirements and putting a set of evaluation criteria in place with a decision process.

Vendor materials – marketing collateral, specification sheets, case studies and other realistic information – are useful in providing a foundation for understanding the product or solution. There really is no substitute for a detailed knowledge base to work from, however.

There are two elementary starting points when looking at new products or technologies to meet the needs of an IT operation. You must ask if there is a capability that can only be done by using this product/technology, and if there is an economic advantage with a particular solution. There are unique products that hold their place because of what they bring. But for other products, there are competitive alternatives and sometimes tradeoffs that bring more considerations into the decision.

Economics for storage systems, including evaluating storage efficiency of the individual elements (see an explanation of this at Evaluator Group’s web site), are the most important considerations besides the fundamental requirements of storing information and providing access. Economics include the gains from doing things faster or more efficiently. If they can be quantified, then the decisions can be made and defended with more confidence.
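The quantification Kerns calls for can be as simple as normalizing price to effective usable capacity. A sketch with made-up numbers (hypothetical model and figures, purely for illustration):

```python
def cost_per_usable_tb(system_price, raw_tb, efficiency=1.0):
    """Effective $/TB once data reduction is factored in.

    `efficiency` is the data-reduction ratio from dedupe,
    compression or thin provisioning (e.g. 3.0 for 3:1).
    Hypothetical model for illustration, not a vendor's pricing.
    """
    return system_price / (raw_tb * efficiency)

if __name__ == "__main__":
    # Two made-up systems: B costs 50% more but claims 3:1 reduction.
    a = cost_per_usable_tb(100_000, 50)        # $2,000 per effective TB
    b = cost_per_usable_tb(150_000, 50, 3.0)   # $1,000 per effective TB
    print(a, b)
```

Even a toy model like this makes the decision defensible: the pricier system wins once the claimed efficiency is applied, and the claim itself becomes the thing to verify during evaluation.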

It is worth remembering that storage decisions need to be rooted in sound judgment, made with the best information and counsel available. New products, new generations and new technology improve the choices.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm). 

May 18, 2011  1:01 PM

Quantum plans source-side dedupe for DXi, StorNext on appliance

Dave Raffo

Quantum is preparing to add source-side data deduplication to its DXi disk backup platform, which currently performs target-side dedupe. Adding source-side dedupe will help Quantum take on EMC, which sells separate products — Avamar and Data Domain – for source and target dedupe.

Quantum CEO Jon Gacek revealed details of the vendor’s product roadmap over the next year Tuesday during its quarterly earnings call. Along with source dedupe, Quantum will add a NAS interface to its DXi 6700 midrange backup system and enhance the DXi software’s capability for protecting data in virtualized environments. Gacek said Quantum is also planning to deliver its StorNext archiving software on appliances and upgrade its Scalar i6000 enterprise tape library with improvements for archiving, high availability and security.

After the earnings call, Gacek disclosed a little more about Quantum’s product plans to StorageSoup. He said the DXi source-based dedupe would consist of client software running on servers that would dedupe data over the wire to improve performance and require less bandwidth. “We have a competitor that sells that as two products,” Gacek said, referring to EMC. “We’ll sell it as an integrated solution. One product is better than two.”
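The bandwidth saving Gacek describes comes from fingerprinting data at the client and shipping only chunks the target has not already stored. A minimal fixed-size-chunk sketch of that idea (hypothetical, not Quantum’s implementation, which would use variable-length chunking and a real wire protocol):

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; real dedupe engines use variable-length chunking

def source_side_send(data, target_index):
    """Split `data` into chunks, hash each, and 'send' only chunks
    whose fingerprint the target has not seen before.

    Returns (recipe, bytes_sent): the recipe of fingerprints lets
    the target rebuild the stream, and bytes_sent shows the
    over-the-wire saving from deduplicating at the source."""
    recipe, bytes_sent = [], 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in target_index:
            target_index[fp] = chunk  # new chunk: ship the bytes
            bytes_sent += len(chunk)
        recipe.append(fp)             # duplicate: ship only the fingerprint
    return recipe, bytes_sent

if __name__ == "__main__":
    index = {}
    payload = b"A" * 8192 + b"B" * 4096  # two identical 4 KB blocks plus one unique
    _, sent = source_side_send(payload, index)
    print(sent)  # 8192: the duplicate block crossed the wire only once
```

Target-side dedupe applies the same fingerprinting after the data arrives; moving the index lookup to the client is what cuts backup bandwidth, which is the argument for bundling both modes in one product.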

Quantum’s source-side dedupe follows the EMC Avamar model. While EMC recently improved Avamar’s performance when used with its Data Domain target dedupe, customers still must buy both products to get source and target dedupe.

The inclusion of source and target dedupe in one product is not unique — most major backup software applications support both. But Quantum is most focused on competing with Data Domain, the giant in the disk backup market.

Quantum still lacks global deduplication, which dedupes across multiple nodes as if they were one node. Data Domain added that feature across two nodes last year with its Global Deduplication Array.

The midrange DXi 6700 is currently a Fibre Channel virtual tape library (VTL) interface device. Quantum will add multiprotocol support, just as it has for its enterprise DXi 8500 system that launched last year. The DXi 6700 will support VTL, NFS, CIFS, and Symantec OpenStorage (OST) interfaces. Quantum’s other midrange and SMB DXi devices are NAS-only.

With StorNext, Quantum is following the path Symantec recently set by offering its FileStore software on an appliance. But Gacek said Quantum will have several appliance choices for different markets and use cases. He said the goal is to make StorNext easier to implement than it is now as a software sale that requires customers to set up their hardware based on their workloads. “We’ll flavor the appliances to go after different types of customers,” Gacek said.

Gacek, who replaced Rick Belluzzo as Quantum CEO April 4, will also revamp the company’s sales force to assign sales teams based on customers’ size and industry instead of by geography and specific product.

Gacek said Quantum’s goal is to provide alternatives for customers and channel partners to EMC’s Data Domain disk backup and Oracle/Sun tape libraries.

EMC sold Quantum’s DXi software before acquiring Data Domain, and Quantum is still looking to recover from the revenue it lost when EMC dropped its OEM deal.

Quantum grew its branded disk and software revenue 38% to $113 million for the fiscal year that ended in March, yet its overall revenue of $672 million fell one percent because of the loss of OEM revenue. Its $165 million in sales last quarter was slightly up over the previous year, however.
