Storage Soup


May 18, 2011  1:01 PM

Quantum plans source-side dedupe for DXi, StorNext on appliance

Dave Raffo

Quantum is preparing to add source-side data deduplication to its DXi disk backup platform, which currently performs target-side dedupe. Adding source-side dedupe will help Quantum take on EMC, which sells separate products, Avamar and Data Domain, for source and target dedupe.

Quantum CEO Jon Gacek revealed details of the vendor’s product roadmap over the next year Tuesday during its quarterly earnings call. Along with source dedupe, Quantum will add a NAS interface to its DXi 6700 midrange backup system and enhance the DXi software’s capability for protecting data in virtualized environments. Gacek said Quantum is also planning to deliver its StorNext archiving software on appliances and upgrade its Scaler i6000 enterprise tape library with improvements for archiving, high availability and security.

After the earnings call, Gacek disclosed a little more about Quantum’s product plans to StorageSoup. He said the DXi source-based dedupe would consist of client software running on servers that would dedupe data before sending it over the wire, improving performance and requiring less bandwidth. “We have a competitor that sells that as two products,” Gacek said, referring to EMC. “We’ll sell it as an integrated solution. One product is better than two.”
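For readers who want the mechanics, here is a minimal sketch of what source-side dedupe buys on the wire, assuming fixed-size chunks and a hash index held by the target; the function names are illustrative, and shipping products generally use more sophisticated variable-length chunking.

```python
# Minimal source-side dedupe sketch: the client hashes chunks locally and
# ships only chunks the target has never seen (illustrative, not DXi code).
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KB fixed chunks, purely for illustration

def chunk_hashes(path):
    """Yield (sha256 digest, chunk) pairs for a file."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            yield hashlib.sha256(chunk).hexdigest(), chunk

def backup(path, target_index, target_store):
    """Send only chunks the target has not already stored."""
    sent = skipped = 0
    for digest, chunk in chunk_hashes(path):
        if digest in target_index:   # duplicate: only the hash crosses the wire
            skipped += 1
        else:                        # new data: ship the chunk itself
            target_store[digest] = chunk
            target_index.add(digest)
            sent += 1
    return sent, skipped
```

The bandwidth savings are the skipped chunks: for data the target already holds, only a fingerprint travels.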

Quantum’s source-side dedupe follows the EMC Avamar model. While EMC recently improved Avamar’s performance when used with its Data Domain target dedupe, customers still must buy both products to get source and target dedupe.

The inclusion of source and target dedupe in one product is not unique — most major backup software applications support both. But Quantum is most focused on competing with Data Domain, the giant in the disk backup market.

Quantum still lacks global deduplication, which dedupes across multiple nodes as if they were one node. Data Domain added that feature across two nodes last year with its Global Deduplication Array.
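Conceptually, global dedupe means identical chunks must meet in the same index no matter which node receives them. One common way to get there is hash-based routing; this sketch illustrates the concept and is not a description of Data Domain’s implementation.

```python
# Route each chunk by its content hash so identical chunks always land on
# the same node's index and dedupe against each other (illustrative only).
def node_for(digest, num_nodes):
    return int(digest, 16) % num_nodes

def store(digest, chunk, node_indexes, node_stores):
    n = node_for(digest, len(node_indexes))
    if digest not in node_indexes[n]:   # first copy anywhere in the cluster
        node_indexes[n].add(digest)
        node_stores[n][digest] = chunk
```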

The midrange DXi 6700 is currently a Fibre Channel virtual tape library (VTL) interface device. Quantum will add multiprotocol support, just as it has for its enterprise DXi 8500 system that launched last year. The DXi 6700 will support VTL, NFS, CIFS, and Symantec OpenStorage (OST) interfaces. Quantum’s other midrange and SMB DXi devices are NAS-only.

With StorNext, Quantum is following the path Symantec recently set by offering its FileStore software on an appliance. But Gacek said Quantum will have several appliance choices for different markets and use cases. He said the goal is to make StorNext easier to implement than it is now as a software sale that requires customers to set up their hardware based on their workloads. “We’ll flavor the appliances to go after different types of customers,” Gacek said.

Gacek, who replaced Rick Belluzzo as Quantum CEO April 4, will also revamp the company’s sales force to assign sales teams based on customers’ size and industry instead of by geography and specific product.

Gacek said Quantum’s goal is to provide alternatives for customers and channel partners to EMC’s Data Domain disk backup and Oracle/Sun tape libraries.

EMC sold Quantum’s DXi software before acquiring Data Domain, and Quantum is still looking to recover from the revenue it lost when EMC dropped its OEM deal.

Quantum grew its branded disk and software revenue 38% to $113 million for the fiscal year that ended in March, yet its overall revenue of $672 million fell 1% because of the loss of OEM revenue. Its $165 million in sales last quarter was slightly up over the previous year, however.

May 16, 2011  7:32 PM

SanDisk acquires Pliant to tackle enterprise SSD market

Dave Raffo

It’s no secret that there are more solid state drive (SSD) vendors in the market today than can possibly survive, and there is bound to be consolidation through acquisition and some companies folding. One step toward consolidation came today when SanDisk acquired startup Pliant Technology for $327 million, giving SanDisk entry into the enterprise market and providing more resources for development of Pliant’s technology.

SanDisk makes NAND flash products for the retail and consumer markets, mostly smartphones, digital cameras and tablets. Pliant makes enterprise multi-level cell (MLC) and single-level cell (SLC) SAS SSDs, including those used by OEM partners Dell, LSI and Teradata in storage systems and servers. There is no overlap in products between SanDisk and Pliant.

SanDisk’s Chief Strategy Officer Sumit Sadana said the vendor took a look at all the enterprise flash providers before acquiring Pliant because of “the technology that Pliant brings to the table, its credibility with tier one storage OEM customers, and its ramping revenue. Pliant is a leader in MLC-based flash, and that allows customers to gain access to this technology in the enterprise space at a lower cost than SLC. We think that’s the winning approach for the future.”

Sadana said SanDisk will continue to sell Pliant’s SLC flash, but will concentrate on MLC. “We’re not opposed to SLC-based solutions, but we will be heavily driving MLC to dramatically increase the presence of flash in the enterprise,” he said.

He said SanDisk will also continue Pliant’s strategy of selling through storage OEMs. Dell sells Pliant SSDs in its PowerVault storage arrays and PowerEdge servers. Pliant SSDs are also used in storage enclosures sold by LSI and Teradata’s data warehouse systems. Pliant has also been pursuing OEM deals with other major storage vendors.

Sadana said most of Pliant’s 80 employees will join SanDisk, including its founders – president Mike Chenery, chief architect Doug Prins and CTO Aaron Olbrich. Pliant CEO Richard Wilmer, who joined the company in March, will stay on through the transition period but Sadana said Pliant VP of marketing Greg Goelz will run the enterprise business and report to SanDisk VP Sanjay Mehrotra.

Goelz said Pliant, which had $57 million in funding, first approached SanDisk about a strategic relationship regarding SanDisk’s NAND chip. “We were also looking at raising money to fuel 2011 growth,” he said. “They looked at us and said, ‘We want to be in the enterprise, you guys are doing well …’”

He said Pliant did not have to be acquired to survive, but it can benefit from being part of a larger established vendor.

“SanDisk wants to be in the enterprise space and we know one or two things about the enterprise space,” Goelz said. “They’re a technology leader in NAND, and it’s a good natural fit.”

Jim Handy, SSD analyst at Objective Analysis, said he agrees that the two companies are a good fit because Pliant includes enterprise data protection features such as triple-redundant protected metadata and support for the T10 Data Integrity Field standard.

“It’s interesting that SanDisk wasn’t paying the slightest bit of attention to the enterprise before,” he said. “And Pliant is a serious player in the enterprise. Pliant is a company that understands enterprise storage rather than a company that understands flash memory. Most companies trying to get into flash memory don’t understand things like end-to-end data protection. That’s completely alien to people in the flash memory market.”

Handy said SanDisk’s challenge is to adopt an enterprise business model. “It’s a consumer oriented company,” he said. “This will be a new deal for them, not just because of the technology but also because of the sales challenges and the new approach to selling.”
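An aside on the T10 Data Integrity Field standard Handy mentioned: DIF appends an 8-byte tuple to every 512-byte block, with a 2-byte guard tag computed as a CRC-16 over the block data. A rough sketch of that guard calculation (a bitwise implementation for illustration, not production code):

```python
def crc16_t10_dif(data: bytes) -> int:
    """Guard-tag CRC for T10 DIF: polynomial 0x8BB7, init 0, no reflection."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Published check value for the ASCII string "123456789":
assert crc16_t10_dif(b"123456789") == 0xD0DB
```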


May 13, 2011  6:02 PM

EMC World notebook

Dave Raffo

LAS VEGAS, Nev. — Notes, quotes and anecdotes from EMC World 2011:

All of EMC’s talk about the cloud this week brought up the inevitable questions about public trust in the cloud, especially after high-profile outages like the one suffered by Amazon EC2 a few weeks ago. EMC CEO Joe Tucci said EMC has already solved the kinds of problems that caused the Amazon crash and other cloud glitches.

“Customers have to be able to recover their data quickly, and that’s what EMC does for a living,” Tucci said. “The Amazon issue had to do with the way data is recovered. Obviously we’ve been doing this for a long time. It took Amazon a long time to recreate lost data and some of that data won’t be recovered. We keep extra copies. If you lose it, we say, ‘OK, it’s lost, but we have an extra copy.’ We also give the option to encrypt data on storage, so if data gets stolen, you have encrypted data.”

Support claims rankle some customers

EMC held a Big Data Summit during the show, attended by about 30 Isilon customers. The customers were asked what EMC can do better, and while most offered suggestions for new product features, a few raised customer support issues. One long-time EMC customer who purchased Isilon storage after EMC acquired the NAS vendor in January took issue with EMC president Pat Gelsinger’s comments that EMC support is consistently among the best in the industry.

“Your support is ugly and as you make acquisitions, it gets uglier,” said the customer, an information services director at a telephone company. “That’s a concern I have with Isilon. I like what I hear from [other users] that you don’t have to touch it for six months, but the day I touch it I need support.”

He talked about spending eight hours on the phone with support once for a CLARiiON problem. He said he spoke with EMC’s support center in Ireland from 2 a.m. to 8 a.m., and then was moved to another call center before the problem was fixed. “A problem that should’ve taken 20 minutes to fix took eight hours,” he said. “Around 8:15, my EMC salesman called and said he wanted to take us to dinner. I said, ‘You can take us to dinner, but bring a truck and take this thing out of here.’”

Gelsinger was gone by then, leaving EMC global marketing CTO Chuck Hollis to take the brunt of the complaints. “When I hear stories like that, I want to cry,” Hollis told him.

Another customer said she found VMware support lacking for customers using EMC storage, even though EMC is the majority owner of VMware. Hollis pointed out VMware is a separately run company, but said EMC recently spent $60 million to hire VMware specialists inside EMC.

Symmetrix sets the table for FCoE

Fibre Channel over Ethernet (FCoE) support for EMC’s VMAX seems like a minor enhancement now, but EMC’s chief strategy officer for enterprise storage Barry Burke said he expects FCoE to become the dominant protocol over time. “I think at the end of the day, FCoE wins out,” he said. “It just doesn’t happen overnight. A lot of customers ask about it. The problem is, there aren’t a lot of storage targets that support it. Now we have FCoE support across the board.”

EMC is expected to add support for SAS drives to VMAX later this year, but Burke said “there’s not a rush” to phase out Fibre Channel drives because there is still customer interest. He said EMC has also advised its Symmetrix DMX customers not to expect any software upgrades as all of its development is going into the newer VMAX platform.

Centera’s still alive

EMC has made a lot of product noise this year with its massive January launch, a February investors event and this week’s EMC World. But its object-based Centera archive system hasn’t had any updates, and barely a word was spoken about it during those three events. That leads me to believe that Centera is either a perfect product that needs no further development or it’s about to be put to rest.

Neither is the case, according to Jon Martin, director of product management for EMC’s Cloud Infrastructure Group.

“Despite what you might hear from our competitors, Centera is not end of life,” Martin said. “In the second half of the year we’ll have a new release addressing some customer requests for features.”

Martin did admit Centera is “mature technology – not a lot of revolutionary features we can add” but said it still has a role despite EMC’s newer and shinier Atmos object-based cloud storage platform. He said Centera has advanced compliance and data retention capabilities that Atmos is not built to address.

Will big data, SSDs push primary dedupe?

Permabit CEO Tom Cook was among the attendees at EMC World, and said the big data theme made him optimistic about his company’s Albireo primary data deduplication technology. “There’s an opportunity to Data Domain this space,” he said, referring to the ability to do for primary data what Data Domain has done for backups.

Cook said besides helping to offset customers’ rapid data growth, primary dedupe can help make solid state drives (SSDs) more economical by reducing the amount of data that goes on them.
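The arithmetic is easy to sketch. With hypothetical numbers (these are not Permabit’s or anyone else’s prices), a 4:1 reduction ratio quarters the effective cost per usable gigabyte of flash:

```python
# Back-of-the-envelope economics with assumed numbers, purely illustrative.
raw_cost_per_gb = 10.00    # assumed $/GB for enterprise flash
dedupe_ratio = 4.0         # assumed 4:1 reduction on primary data
effective_cost = raw_cost_per_gb / dedupe_ratio
print(f"Effective cost: ${effective_cost:.2f} per usable GB")  # $2.50
```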

Cook also said his OEM deals with BlueArc and Xiotech should result in those vendors delivering products with Albireo this year. He said it was Permabit’s decision to end its deal with LSI after NetApp acquired LSI’s Engenio storage division because its other OEM partners consider NetApp a major competitor.

Rainfinity Redux

Remember Rainfinity? EMC acquired the file virtualization vendor in 2005 only to phase out its product line. Now Rainfinity technology has resurfaced in the Cloud Tiering Appliance EMC launched this week. The CTA migrates files from storage devices to the cloud, from EMC competitors’ hardware onto EMC storage, or from one EMC system to another.

EMC to do more with Hadoop

EMC’s support of Hadoop on its Greenplum appliances is likely the first step in the storage vendor’s deeper involvement with the Hadoop open source community. “We have work to do to establish ourselves as a credible player in the Hadoop community,” EMC chief marketing officer Jeremy Burton said. “We have to contribute.”

EMC executives say they can add technology to provide fault tolerance, mirroring and high availability to Hadoop. The work with Hadoop will be done through the Greenplum team, Burton said.


May 11, 2011  12:04 PM

Free storage features

Randy Kerns

For years, storage systems have evolved with the addition of high-value features. In many cases the features have become differentiating characteristics between storage systems.

They’ve also become requirements for any sophisticated storage array.

These include:
• snapshot point-in-time copies,
• remote replication with synchronous and asynchronous transfers and support for consistency groups,
• thin provisioning,
• data compression,
• tiering across multiple storage device types,
• caching with multiple levels,
• cloning of volumes,
• and a number of others not as prevalent across different storage systems.
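To make the first item on that list concrete, here is a minimal copy-on-write sketch of how a point-in-time snapshot can work; it is illustrative only and not any vendor’s implementation.

```python
# Copy-on-write snapshots: taking a snapshot is nearly free; old data is
# copied aside only when a block is first overwritten (illustrative only).
class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block number -> data
        self.snapshots = []                # each is {block: preserved data}

    def snapshot(self):
        self.snapshots.append({})          # empty until blocks change
        return len(self.snapshots) - 1     # snapshot id

    def write(self, block, data):
        for snap in self.snapshots:        # preserve old data on first overwrite
            if block not in snap:
                snap[block] = self.blocks.get(block)
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        snap = self.snapshots[snap_id]
        if block in snap:
            return snap[block]             # preserved before an overwrite
        return self.blocks.get(block)      # unchanged since the snapshot
```

The key property is that creating the snapshot costs almost nothing; the copy work happens only when a block is first overwritten.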

Most of these features have traditionally been extra cost items in storage systems. The costs are normally additional licensing charges, either per storage system or capacity-based. The vendor justifies the extra charges because of the effort to develop the feature and the additional costs incurred for the support. With this method, the customer does not pay for unused features.

However, a change is underway: storage vendors are starting to include some of these features in the base product as part of the base price. This is being done for competitive reasons. When buying features a la carte, customers often feel charges are just being piled on. The additional charges can raise the price significantly, which frustrates customers. Vendors who include the features in the base price argue that they deliver more value that way, and the single price has become a differentiator for them.

Interestingly, I have run into customers who have asked if they could get a price break if they didn’t use one of the included features. I attribute this to the conditioning that they have to pay extra for features. I heard a salesman respond that a customer who buys a car wouldn’t expect to pay less if that person didn’t intend to use the back seat. These features are just included in the base product now.

Whether this becomes the standard for storage systems is not clear, but that is certainly the direction vendors are heading today. There are still high-value features that carry an extra charge, but a number of capabilities that were once add-on features are now included in the base system. Over time, competitive pressures will probably continue to drive more features into the base product. While this may be good for the IT customer, it cuts into a vendor’s profitability. But it is probably inevitable. We spend time evaluating the features of the different storage systems and write about them at the Evaluator Group web site.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm). 


May 9, 2011  6:40 PM

EMC flashes lofty SSD plans *UPDATED*

Dave Raffo

LAS VEGAS, Nev. – The first big storage product launch at EMC World is “Project Lightning,” the code name for a PCIe flash-based server cache device designed to accelerate the performance of its storage systems. It was one of several solid state drive (SSD) products EMC previewed Monday at the show. Project Lightning is scheduled to ship in the second half of the year.

The vendor said it would qualify multi-level cell (MLC) SSDs for its storage arrays this year, and deliver all-flash Symmetrix enterprise systems by the end of next month and an all-flash VNX unified system this year. EMC also created a flash business unit to develop new technologies and manage partner and supplier relationships.

EMC began shipping single-level cell (SLC) flash on its Symmetrix systems in 2008, and claims it has shipped nearly 14 PB of flash capacity in storage arrays since last year. The vendor said half of its VMAX and VNX systems orders now include flash capacity.

EMC has not shipped PCIe flash or MLC, which is lower-cost flash than SLC but comes with performance and reliability tradeoffs. EMC rival NetApp ships a PCIe-based Flash Cache product in its storage arrays that is the main focus of its SSD strategy.

Storage vendors have increasingly turned to MLC to try to spur SSD sales, which still meet resistance because of price.

“EMC will enter the server flash business,” EMC president Pat Gelsinger said of Project Lightning during his keynote address today. “We’ll take flash and put it in the server so it acts as server DAS or server cache.”

During a media briefing after his keynote, Gelsinger would not say if EMC is working with Intel on Project Lightning. According to leaked Intel roadmaps, the chip vendor is working on PCIe flash products. “This is an EMC product,” Gelsinger said. “Obviously Intel is one of our most core technology partners, but Intel hasn’t announced its plans yet so it would be premature to announce any partnerships.”

Gelsinger also said EMC would integrate its FAST automated tiering software with the PCIe card to optimize data placement.

Gelsinger pointed out Project Lightning is server-side flash, while NetApp’s Flash Cache is in the storage. “We will offer flash in the server as DAS or cache and build a connection to extend flash intelligence into the storage array or server,” he said. “Flash Cache is just storage-side cache, it’s largely what FAST already does.”

EMC rolled out a few other products today:

The Isilon NL-108, the biggest version of its nearline NAS platform, scales to 108 TB with thirty-six 3 TB drives in one system and to 15.5 PB with a 44-node cluster. Isilon also rolled out SmartLock, policy-based retention software designed to prevent accidental deletion of data.

Atmos 2.0 has improved performance for object ingestion, GeoParity software for moving data across Atmos deployments, GeoDrive software that lets Windows users move data to the cloud and an Amazon S3 interoperability API that lets S3 customers use Atmos.
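As a hedged illustration of what an S3 interoperability API implies, a generic S3 client should be able to point at an Atmos endpoint instead of Amazon. The endpoint URL, bucket and credentials below are hypothetical placeholders, and the client shown is a present-day generic S3 library rather than anything Atmos-specific.

```python
# Sketch: a generic S3 client aimed at an S3-compatible Atmos endpoint.
# The URL, bucket and credentials are made-up placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://atmos.example.com",   # hypothetical Atmos endpoint
    aws_access_key_id="TENANT_UID",             # placeholder credentials
    aws_secret_access_key="SHARED_SECRET",
)
s3.put_object(Bucket="backups", Key="report.pdf", Body=b"hello")
```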

Three GreenplumHD appliances integrate Hadoop open source software for distributed applications with large data sets. The appliances run Hadoop as a virtual appliance or on a Greenplum hardware appliance.


May 9, 2011  4:56 PM

EMC’s Tucci: It’s all about the cloud

Dave Raffo

The early message from EMC executives at EMC World is, “If you’re not using a private cloud yet, you’re late.”

That’s how EMC chief marketing officer Jeremy Burton opened the first day’s keynote session.

“If you have not begun the journey to the private cloud, what are you doing? You’re late,” said Burton, adding “cloud meets big data is what EMC World is all about.”

EMC CEO Joe Tucci declared “the cloud wave is the biggest and most disruptive change we’ve seen yet in IT” during the early moments of his keynote.

And just in case anybody gets the idea that EMC has demoted another of its lynchpin technologies, Tucci said “virtualization is the key to the cloud. This is the year when most, if not all, mission critical applications get virtualized and run on cloud topologies. This will be the year when all IT professionals will understand the opportunities [presented by the cloud].”

Tucci also predicted a hybrid cloud combining private and public clouds will become the “de facto enterprise model” because it offers, among other things, “incredible levels of agility.”

Tucci gave a brief preview of EMC’s product launches at the show, including a new VPlex active-active storage system with asynchronous mirroring, some Isilon scale-out NAS tweaks, Hadoop support for Greenplum appliances and something he called Project Lightning. Tucci called Project Lightning “a top secret” and said EMC president Pat Gelsinger would address it during his keynote later today. Sources say Project Lightning involves flash solid state storage.


May 9, 2011  3:53 PM

NetApp closes Engenio deal, launches three storage systems

Dave Raffo

NetApp closed its $480 million acquisition of LSI’s Engenio storage system division today and hit the ground running with three new systems based on Engenio technology.

NetApp rechristened Engenio storage as the NetApp E-Series, aimed at organizations with large data sets and high-performance block data needs. It launched the E5400 midrange system for OEMs as well as a Hadoop storage system for analytics and a full-motion video system for battlefield intelligence.

NetApp also said it has commitments from two of Engenio’s largest OEM customers – IBM and Teradata Corp. – to transfer those OEM relationships to NetApp.

The E5400 is a 4U 60-drive dual-controller system that supports 120 TB raw capacity and 6 GBps of sustained throughput. It uses 3.5-inch 2 TB nearline SAS drives. The E5400 is part of the family that IBM sells as its DS5000 through an OEM deal it had with LSI.

NetApp’s E-Series consists of the E2600 entry level storage, the E5400 midrange platform and the E7900 for high performance computing (HPC).

NetApp’s Hadoop system is aimed at the big analytics segment of the Big Data market that vendors talk about so much today but that can still confuse customers. NetApp divides Big Data into Big Content and Big Analytics. The analytics segment deals with making complex queries on data, much of it in structured data warehouses. The Hadoop storage system is built on the NetApp E2600, which Engenio was selling before the NetApp acquisition. The E2600 has a base configuration of 16 to 32 nodes in a shelf.

Hadoop is a free, Java-based programming framework that supports the processing of large distributed data sets. The E2600 is designed to help customers build Hadoop systems quickly to ingest large data blocks. “Hadoop tends to be the Grand Central Station of enterprise data warehousing these days,” said Val Bercovici of NetApp’s CTO office.
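For a sense of the programming model, Hadoop Streaming lets any executable act as mapper or reducer over stdin and stdout, independent of the storage underneath. A minimal word-count sketch, with illustrative file and function names:

```python
# wordcount.py -- minimal Hadoop Streaming word count (illustrative).
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop delivers keys sorted, so counts for a word arrive together.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

It would be submitted with the standard streaming jar, roughly `hadoop jar hadoop-streaming.jar -input in -output out -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py`; the jar name and paths vary by distribution.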

Bercovici said NetApp chose the Engenio platform rather than NetApp’s FAS series because the Engenio systems are block-based and designed more for streaming I/O to low-cost SATA drives, making them a better fit for Hadoop use cases.

NetApp’s Hadoop launch comes on the same day that NetApp rival EMC rolled out its GreenplumHD Data Computing Appliance with Hadoop at EMC World.

The NetApp full-motion video system is designed for government intelligence agencies that need to store images from unmanned drones and satellites on the battlefield.

NetApp welcomes most Engenio employees, but not Permabit dedupe

NetApp VP of product and solutions marketing Chris Cummings said most of Engenio’s 1,100 employees will join NetApp but didn’t have an exact number.

It doesn’t appear that NetApp will carry over Engenio’s fledgling OEM deal with Permabit for primary data deduplication. Cummings said that deal was struck when LSI was looking to focus its systems more on general IT use, but NetApp intends to focus Engenio storage on customers with large bandwidth needs. He said NetApp will likely leave it to OEM partners to add data reduction, and sell Engenio as part of solutions behind NetApp’s V-Series, which already provides primary dedupe.


May 5, 2011  1:10 PM

Archiving’s two primary use cases

Randy Kerns

Data archiving is often incorrectly assumed to be an extension of backup. That’s mainly because earlier technology limitations necessitated the use of backup software to store data on devices.

Storing information that is not frequently accessed but must be retained is archiving. An archiving system’s defining characteristics are speed of access, the cost of storing the information over decades, and the ability of the application to access data in context.

There are two primary archiving use cases. Many software programs are available to move and manage the information on an archive system, with features that take into account those two use cases.

The first archiving use case is to store information that has reached a known point in a process or business. The known point could be when a project finishes, but the project information must be retained for a future requirement. An example of this could be a construction project where materials such as schedules, product information and sub-contracts are required to be retained after the project finishes. The information still needs to be available and may be accessed intermittently, but it has a low probability of access.

There are many examples of this first archiving usage, and they exhibit a similar operational characteristic: the information, typically in the form of a set of files, needs to be moved to the archiving system and protected with multiple copies when archived. Archiving software used in this case must maintain indexes and access controls over the information in addition to moving it to the archiving system.

The second of the two primary archiving usages is to move data that is no longer active off high performance expensive storage. Moving the data to an archive system frees up space for critical data on the primary storage tier and lessens the need to purchase additional capacity. Protecting the data at the time of archive according to the data protection requirements also takes that data out of the regular backup process, reducing backup windows.

Sophisticated software lets organizations automate the selection and movement of the inactive data while providing seamless access to the information on the archive system.
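What that automation boils down to for the second use case can be sketched in a few lines, assuming a simple age-based policy; real archiving software adds the indexing, retention controls and multiple protected copies discussed above.

```python
# Age-based tiering sketch: move files untouched for N days from primary
# storage to an archive tier (illustrative policy, not a product's logic).
import os, shutil, time

def archive_inactive(primary, archive, days=180):
    cutoff = time.time() - days * 86400
    for root, _dirs, files in os.walk(primary):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:        # inactive file
                rel = os.path.relpath(src, primary)
                dst = os.path.join(archive, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                 # free primary capacity
```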

Both use cases for archiving provide valid justification for archiving storage systems and software. More information on archiving can be found on the Evaluator Group site.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm). 


May 4, 2011  3:39 PM

Brocade, Emulex kick off 16-gig FC hype

Dave Raffo

Ready or not, 16 Gbps Fibre Channel is coming.

Emulex and Brocade this week said they have 16-gig FC devices being qualified by storage and server vendors, although there doesn’t appear to be any screaming demand from organizations looking for more than 8-gig FC support. The aggressive move to 16-gig FC is a sign that mass adoption of Fibre Channel over Ethernet (FCoE) is still a long way off and pure FC has plenty of life left.

Emulex got the ball rolling Monday with its XE201 I/O Controller, a converged adapter that supports 16-gig FC along with FCoE, 10-Gigabit Ethernet (GbE) and 40 GbE. Emulex will demo the XE201’s 16-gig capability at EMC World next week.

Brocade Tuesday rolled out what it calls the first end-to-end 16-gig FC platform of products. They include the DCX8510 Backbone SAN switch with up to 384 16-gig ports at line-rate speeds, the 6510 edge switch with 24 or 48 ports, and the 1860 Fabric Adapter that supports FC, Ethernet and FCoE. Brocade is also adding 16-gig FC support to its Network Advisor 11.1 unified management software and Brocade Fabric Manager 7.0.

Brocade said the new switches and software will be available this quarter. However, its OEM partners probably won’t complete qualifications before August. EMC and Hewlett-Packard are already qualifying the 16-gig products, with other storage vendors expected to follow soon.

Emulex VP of marketing Shawn Walsh said he expects OEM certification for the second half of this year for Emulex’s 16-gig adapter.

It’s no surprise that Brocade is pushing the faster FC. Brocade has always been less bullish on FCoE than its FC switch rival Cisco, and Brocade picked up significant market share gains by beating Cisco to 4 Gbps and 8 Gbps FC gear. During its tech summit day Tuesday, Brocade execs said the market agrees with their take on FC and FCoE.

“FCoE adoption has been modest,” said Jason Nolet, VP of Brocade’s data center and enterprise networking group.

“Our customers say they want to stay with Fibre Channel,” Brocade CTO Dave Stevens added.

Emulex’s Walsh agreed with that. “Ten-gig [Ethernet] adoption is happening fast, but there’s still discrete networks,” he said. “Customers are not going to throw away what they have today. One of the big questions we get is, ‘What is Emulex’s commitment to 16-gig Fibre Channel?’”

It’s almost certain that more vendors will have 16-gig FC products by the Fibre Channel Industry Association (FCIA)’s October 16-gig plugfest.

If history is any indication, Cisco will trail Brocade by six months to a year with 16 Gbps FC. Emulex’s main adapter rival QLogic isn’t commenting on its 16-gig roadmap but is expected to support it this year.

Brocade execs point to virtualization and the cloud as drivers of the faster technology. However, there will be a price premium to move from 8-gig to 16-gig. That was also the case with the move from 4-gig to 8-gig, and that transition was slower than the moves from 1-gig to 2-gig and from 2-gig to 4-gig when there was no price hike for the higher bandwidth. While the first two transitions took about two years each for the higher bandwidth to become dominant, the move to 8-gig took about three years.

In a blog post this week, Wikibon senior analyst Stuart Miniman advised organizations to pursue 16-gig FC and converged networks on their internal schedules rather than according to vendor roadmaps.

“Most users can take a slow and deliberate approach to the adoption of new generations of speeds,” Miniman wrote. “ … customers can support both FC and Ethernet and consider the migration on internal schedules rather than on the pace that the vendor community may want to push or pull. For equipment refresh cycles that start in 2012 or later, consider looking for adapters that can support the latest of both FC and Ethernet.”


May 3, 2011  5:42 PM

HP prepares new EVA, vows to keep XP *Update*

Dave Raffo

Hewlett-Packard today said it will launch its next-generation EVA midrange storage system in June, and denied that it will stop selling its Hitachi-manufactured P9500 XP enterprise platform.

For now, HP isn’t giving much detail on the P6000 EVA except to say it will have a 6 Gbps SAS back end, 2.5-inch SAS drives and 8 Gbps Fibre Channel connectivity. The vendor is offering an early access program to customers ahead of the official launch at the HP Discover user show June 6-10.

“We want to let folks know where we stand,” said Craig Nunes, marketing director for HP StorageWorks. “A quarter ago, there was a lot of speculation about when it [the next EVA] is going to come. We’re trying to be as proactive as possible.”

HP is also sending a message that it will continue to develop the EVA line as well as continue the P9500 that comes from an OEM deal with Hitachi Limited.

There has been speculation in the industry that HP would drop the XP, the EVA or both product platforms since it acquired 3PAR for $2.35 billion last year. HP executives have maintained they will keep both the XP and EVA, but StorageNewsletter.com posted an item Monday citing unnamed sources saying HP would stop selling the P9500 XP and replace it with high-end 3PAR arrays.

The Storage Newsletter story drew vehement denials from HP, with HP StorageWorks VP of marketing, strategy and operations Tom Joyce telling the newsletter “HP is in no way discontinuing the XP business relationship with Hitachi, Ltd. of Japan. The XP, and its newly named P9500 successor, are very successful mission-critical storage products for HP.”

HP’s storage blogger Calvin Zito added in a blog post Monday, “I saw a story on a small storage news website today claiming that HP would no longer OEM the XP Disk Array from HDS [Hitachi Data Systems].

“The story is wrong. Period.”

Joyce told StorageSoup that the industry speculation has created confusion with HP EVA and XP customers about the vendor’s roadmap.

“When you do something as publicly visible as making the investment HP made in 3PAR, it begs the question, ‘What does this mean to existing products?’” he said. “The thing about a free press is people are free to write what they want. But we’ve been consistent since we bought 3PAR that we would introduce a new EVA in the first half of 2011. We said the P9500 will stay a critical part of our product line. Over a period of time customers will like to have alternatives. Some customers will say ‘I’ll use 3PAR for something I used to use XP for,’ but 3PAR will never replace some of the things XP does well. We will never add mainframe connectivity for 3PAR.”

Joyce said EVA’s selling point is simplicity compared to other FC storage systems. “Folks buy EVA because of ease of use right out of the box,” he said. “It’s simple to use and you can run it with a lot less people.”

UPDATE: According to documents uncovered by SearchStorage ANZ’s Simon Sharwood, HP will deliver a P6300 and P6500 EVA. The P6500 will be a higher end version with faster processors, more cache, greater maximum capacity and so on. But the more interesting parts of the upgrade are the software and management features – mainly the reservationless dedicate-on-write thin provisioning that HP has in its 3PAR and LeftHand platforms. The new EVA will also have dynamic LUN migration, and new remote replication and clustering capabilities.
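As a rough sketch of the thin provisioning idea (illustrative only, not HP’s dedicate-on-write code): the volume advertises a large virtual size but consumes physical blocks only when a block is first written.

```python
# Thin provisioning sketch: virtual capacity is promised up front, physical
# blocks are consumed only on first write (illustrative, not HP's code).
class ThinVolume:
    BLOCK = 4096

    def __init__(self, virtual_size):
        self.virtual_size = virtual_size   # what the host sees
        self.allocated = {}                # virtual block -> data

    def write(self, block, data):
        self.allocated[block] = data       # physical space used on first write

    def read(self, block):
        # Unwritten blocks read back as zeros, costing no physical space.
        return self.allocated.get(block, b"\x00" * self.BLOCK)

    def physical_usage(self):
        return len(self.allocated) * self.BLOCK
```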

