Storage Soup


November 22, 2011  8:45 AM

Brocade: 16-gig Fibre Channel switches moving fast

Dave Raffo

Brocade executives say the 16 Gbps Fibre Channel (FC) switches they rolled out earlier this year have been an immediate hit in the market, with customers upgrading at a faster pace than they did with 8 Gbps and 4 Gbps switches.

During the vendor’s earnings call Monday evening, Brocade reported nearly $40 million in revenue from 16-gig directors and switches in the first full quarter of availability. Brocade’s total FC revenue was approximately $303 million last quarter, up about 10% from the previous quarter although down about 4% from a year ago. Brocade execs pointed out that all of the major storage vendors have qualified Brocade’s 16-gig FC gear, while rival Cisco has yet to support 16-gig FC.

Brocade execs said server virtualization and PCIe-based flash are pushing customers to the higher performing FC. They also say customers are sticking to FC instead of moving to Fibre Channel over Ethernet (FCoE).

“We saw a faster-than-expected ramp of our 16-gig portfolio of products,” Brocade CEO Mike Klayko said on the call. “This is perhaps the fastest and smoothest qualification process [with OEM partners] of any new product portfolio among our OEMs.”

Jason Nolet, Brocade’s VP of data center and enterprise networking, said FC remains the “premier storage networking technology for mission-critical apps.” He said Brocade is selling FCoE in top-of-rack switches but there is “almost no take-up” in end-to-end FCoE implementations. “Because of that, Fibre Channel continues to enjoy that kind of premier place in the hierarchy of technologies for storage networking,” he said.

The Brocade executives also played up the monitoring and diagnostics built into their 16-gig switches, suggesting the vendor will make more of a push into this area. Brocade customers have turned to third-party tools for this, such as Virtual Instruments’ Performance Probe. But Virtual Instruments CEO John Thompson recently complained that Brocade has been telling its customers not to use Virtual Instruments products despite having a cooperative marketing relationship in the past. The management aspect of Brocade switches will be worth watching in the coming months.

November 17, 2011  10:17 AM

Thailand floods have NetApp treading water

Dave Raffo

NetApp executives admit they are concerned about how the hard drive shortage caused by floods in Thailand will affect their sales over the next few months. Those concerns prompted the vendor to lower its forecast for next quarter and brought negative reactions on Wall Street.

Of course, NetApp isn’t the only company that will feel the sting of a hard-drive shortage. All storage companies will suffer from a shortage of drives, and customers will suffer too as prices go up. But NetApp also has other issues. Its revenue from last quarter was slightly below expectations, and NetApp is facing increased competition from rival EMC in clustered NAS and unified storage. There is also a potentially sticky matter concerning sales of NetApp storage into Syria for use in an Internet-surveillance system – a sale NetApp says it had nothing to do with.

NetApp Wednesday reported revenue of $1.507 billion last quarter, slightly below analysts’ consensus expectation of $1.54 billion. NetApp said its sales were lower than expected in nine of its 46 largest accounts, causing the miss. “The rest of the business was generally positive,” NetApp CEO Tom Georgens said, although he admitted the recent rollout of the vendor’s FAS2240 system prompted customers to hold off on buying entry-level FAS2000 products.

At least four Wall Street analysts downgraded NetApp’s stock today following its results and comments on next quarter, and the price of NetApp’s shares dropped 9% in pre-market trading. NetApp executives said they have adequate hard drive supply through the end of the year, but they have a difficult time predicting what will happen after that.

“The impact of the Thailand flooding can potentially be the biggest swing factor [over the next six months],” Georgens said. “Although enterprise class drives are considered to be the least impacted, we still anticipate some amount of supply and pricing complexity. We have all heard the predictions of the industry analysts and the drive vendors themselves. Some of the information is conflicting, and most of it is changing daily in regards to scope and ultimate impact.”

On the positive side, NetApp said its FAS6000 enterprise platform revenue grew 100% year-over-year, its midrange FAS3000 series increased 34%, and the E-Series acquired from LSI Engenio earlier this year increased 11% from the previous quarter.

Georgens said NetApp’s solid-state Flash Cache product is becoming common on high-end arrays, and hinted that NetApp would add data management software for caching data on server-based PCIe flash cards. “I think you’ll continue to see innovation on the flash side from NetApp, both inside and outside the array,” he said.

Still, NetApp executives were forced to deal with unpleasant topics during their earnings call:

EMC Isilon
Isilon’s scale-out NAS platform sales have spiked this year since EMC acquired Isilon and put its massive sales force behind it. But Georgens said NetApp’s new Data ONTAP 8.1 software brings greater clustering capability to FAS storage, and that its E-Series (from Engenio) and StorageGRID (from Bycast) object storage platforms also increase its “big data” value against Isilon.

EMC’s VNX
Georgens said NetApp’s problems haven’t been caused by EMC’s new VNX unified storage series, despite EMC’s push of its SMB VNXe product into the channel. “The VNX has not caused much of a change in dynamics in many accounts,” he said. “The VNXe in terms of EMC’s channel incentives is something that we’ve seen more of. That’s been the strongest part of our portfolio, so I don’t think they’ve slowed us down much. Nevertheless … that’s been something that’s generated more discussion within NetApp than actually the VNX itself. I think the VNX itself has been inflicting more pain on Dell than on NetApp.”

Sales to Syria
Georgens ended the call by addressing a news story reported by Bloomberg News last week that U.S. congressmen are calling for an investigation into the roles played by NetApp and BlueCoat involving sales of their products into Syria. According to Bloomberg, NetApp’s products appeared in blueprints for an Internet surveillance system being implemented in Syria by an Italian company. Georgens said NetApp did not support the sale of its storage to Syria and “we are just as disturbed that this product is in a banned country as anybody else.” He also pointed out that NetApp only sells storage, not the applications that Syria could use to intercept e-mails.

Georgens added that NetApp is helping the U.S. government in its investigation. “I can tell you we did not actively seek out, we did not choose to sell to the Syrian government, and we’re not looking for a way to circumvent U.S. law to sell to the Syrian government,” he said. “We have no interest in providing product to a banned country. I just wanted to make sure that was clear.”


November 16, 2011  6:26 PM

Crossroads launches LTFS StrongBox

Sonia Lelii

Crossroads Systems said this week that its NAS-based StrongBox data vault will begin shipping in December. StrongBox uses the Linear Tape File System (LTFS) to provide disk-like access across a back-end LTO-5 tape library for archiving.

StrongBox will be among the first products to take advantage of LTFS, which allows tape to act like disk so that users can retrieve data from LTO-5 cartridges by searching a file system directory. As a disk-based device, StrongBox ingests data and stores it on internal disk before archiving it to an external tape library. With LTFS, users no longer have to go through the cumbersome process of manually searching for data from tape.

“LTFS in itself is great, but how that technology is appropriated is what is important,” said Debasmita Roychowdhury, Crossroads Systems’ senior product manager for StrongBox. “With StrongBox, there is no dependency on backup and archiving applications. The tape behaves just like a file system.”

StrongBox comes in two models. The T1 is a 1U server with 5.5 TB of capacity that supports 200 million files at a 160 MBps transfer rate over dual Gigabit Ethernet (GbE) ports. It can write data to LTO tape libraries or external disk arrays via dual 6 Gbps SAS ports. The T3 is a 3U device that holds up to 14 TB of capacity and handles up to 5 billion files. It can ingest data at speeds up to 600 MBps over quad GbE ports, and write data to back-end tape libraries or disk arrays via four 6 Gbps SAS ports or four 8 Gbps Fibre Channel ports. Both models contain solid-state drives (SSDs) to back up the appliance configurations and a database that holds the file-system mapping data. Both versions support Windows, Linux, and Mac systems via the CIFS or NFS network protocols.

StrongBox allows IT managers to mount CIFS and NFS file shares, and the device provides a persistent view of all the files, whether they are stored on disk or tape. Data lands on the NAS system and stays on disk until one hour after it was last modified. After that, the files become read-only, policies are applied per file share, a hash is calculated for each file, and the files are moved onto tape. For retrieval, data is pulled from tape onto the StrongBox and sent to the application making the request.
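
That lifecycle is straightforward to picture in code. Below is a minimal sketch of one policy pass, with hypothetical names and thresholds; Crossroads has not published an API, and the choice of SHA-256 here is an assumption (the company says only that a hash is calculated per file):

import hashlib
import time
from pathlib import Path

DISK_RETENTION_SECS = 3600  # files stay writable on disk for one hour after last modification

def sha256_of(path: Path) -> str:
    """Fingerprint a file before it is migrated (hash algorithm assumed)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def archive_pass(share_root: Path, move_to_tape) -> None:
    """One policy pass over a file share: hash and migrate files idle for over an hour."""
    now = time.time()
    for path in share_root.rglob("*"):
        if path.is_file() and now - path.stat().st_mtime > DISK_RETENTION_SECS:
            digest = sha256_of(path)    # integrity hash recorded before migration
            move_to_tape(path, digest)  # tape placement is the appliance's job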

LTFS allows tape to act like disk by partitioning the media: one partition holds a self-contained hierarchical file-system index, and a second partition holds the content. When a tape is loaded into a drive, the index and contents can be viewed by a browser or any application that has the tape attached to it. LTFS allows any computer to read data on an LTO-5 cartridge.
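
A rough mental model of that two-partition layout, with simplified fields (the real LTFS index is an XML document defined by the LTFS format specification; this sketch is illustrative only):

from dataclasses import dataclass, field

@dataclass
class LtfsFileEntry:
    name: str
    length: int
    extents: list = field(default_factory=list)  # byte ranges mapped to (partition, start block) on tape

@dataclass
class LtfsVolume:
    index: dict = field(default_factory=dict)        # partition "a": self-contained file-system index
    data_blocks: list = field(default_factory=list)  # partition "b": file content, written sequentially

    def listdir(self, prefix="/"):
        # Browsing the directory tree touches only the index partition;
        # no sequential scan of the content partition is needed.
        return [p for p in self.index if p.startswith(prefix)]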

StrongBox has self-healing and monitoring capabilities that automatically detect media failures and degradation. If problems with the media are detected, it migrates data off the bad media to another cartridge non-disruptively. Future versions will have a dual-copy export policy, so that one tape can be shipped off for archiving, and a WAN-accelerated replication policy so that data can be replicated between two StrongBox systems.

Currently, StrongBox supports IBM and Hewlett-Packard tape libraries, and Crossroads is testing the product with libraries from Quantum and Spectra Logic. The T1 model is priced at $21,750 with 10 TB of capacity, while the T3 is priced at $30,700 with 10 TB. Support for the full 5 billion files costs another $4,560.


November 16, 2011  11:41 AM

Dell storage revenue tumbles after changes

Dave Raffo

Dell CEO Michael Dell likes to refer to his company as a hot storage startup because it is overhauling its storage technology through acquisitions and internally developed products. And like most startups, Dell storage is experiencing growing pains. Right now, Dell storage is actually experiencing non-growing pains.

Like its overall results last quarter, Dell storage sales were below expectations. Year-over-year storage revenue dropped 15% to $460 million, mostly due to revenue lost because of its divorce from partner EMC. Dell-developed storage revenue increased 23% year over year to $388 million, but that was down from $393 million the previous quarter.

Dell executives point to the improved margins that come from owning storage IP through the EqualLogic and Compellent acquisitions instead of reselling EMC’s storage. But Michael Dell didn’t seem thrilled when discussing his company’s storage revenue during the earnings conference call Tuesday night.

“It wasn’t completely up to our expectations,” he said. “There’s some room for improvement there. Growth in Dell IP storage was 23% year over year and it’s now 84% of our overall storage business. We had good demand from Compellent, we launched a whole new product cycle in EqualLogic. There’s definitely more to do here and … we have put a lot of new people into the organization, and they’re becoming productive, and we still remain very optimistic about our ability to grow that business.”

According to a research note issued today by Aaron Rakers, Stifel Nicolaus Equity Research’s analyst for enterprise hardware, the overall storage market grew approximately 12% year over year for the quarter. In comparison, EMC storage revenue increased 16%, Hitachi Data Systems grew 24% and IBM was up 8% over last year.

Rakers wrote that he expected Dell’s storage revenue for the quarter to be around $542 million, and he believes “overlap between EqualLogic and Compellent has been a challenge.” He added that he assumes “muted EqualLogic revenue growth” for the quarter while Dell expanded its Compellent channel into 58 countries – up from 30 countries a year ago.


November 16, 2011  8:32 AM

Making storage simple isn’t easy

Randy Kerns

IT managers want storage systems that are simple to administer. They measure ease of installation in time and the number of steps it takes. Ongoing administration is viewed as a negative – “Just alert me when there is a problem and give me choices of what to do about it” is a familiar response I hear when talking to IT people.

The problem is that simplicity is hard to achieve when building storage systems. In a complex storage system, it is difficult to bake in automation intelligently enough to make the system “simple.”

Simplicity in storage has come in many forms, including the advanced GUI you see in products such as the IBM XIV and Storwize V7000 or the automation designed into EMC’s FAST VP tiering software. These products are complicated to develop and require an understanding not only of the storage system but of the dynamics encountered in an operational environment. Designing in simplicity can be expensive, too. It requires a substantial investment in engineering and the ongoing support infrastructure to deal with problems and incremental improvements.

But simplicity in a storage system should seem natural, without overt signs of complexity. The best comment I’ve heard from an IT person was, “Why wasn’t it always done this way?”

In many customers’ minds, simplicity doesn’t translate to extra cost. A “simple” system should cost more because it is more expensive to produce, but people often think it should cost less because it is … well, simple.

A potential problem for vendors comes when they highlight specific characteristics of a product to differentiate it from competitors. This seems logical – you want to show why your system is different. Unfortunately, explaining a product’s underlying details and touting its simplicity are contradictory: if the product is simple, the underlying details shouldn’t need explaining. But if the marketing team doesn’t explain them, it may be unable to distinguish the product from others. The result is confusion, and marketing messages that are ignored or received incorrectly.

One way out for vendors is to list their features and why they are different while – more importantly – giving the net effect of those differences. They should explain the value of features such as performance and reliability, and couple that with the simplicity message: simple is good, complex is bad. Tell the simplicity story and show the elements that contribute to it. And for underlying details that may be differentiators, explain them in the bigger context where their effect can be measured, separate from the simplicity that took such a large investment to achieve.

It is too easy to focus on product details while hiding the real measures needed for a storage product decision. If product details are the only message, then maybe something else is lacking.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


November 15, 2011  2:58 PM

TwinStrata delivers cloud SANs

Sonia Lelii

TwinStrata is extending the capabilities of its CloudArray gateway device, adding support for on-premises SAN, NAS and direct-attached storage devices as well as private clouds.

TwinStrata launched CloudArray as an iSCSI virtual appliance in May 2010, and added a physical appliance later in the year. The gateway moves data off to public cloud providers. With CloudArray 3.0, TwinStrata is trying to appeal to customers who want to expand their SAN and private clouds.

“We are broadening our ecosystem of the private and public cloud, and also leveraging existing storage as a starting point,” TwinStrata CEO Nicos Vekiarides said. “We are enabling customers to create a hybrid configuration to combine existing assets.”

Vekiarides said that by letting customers use CloudArray with existing storage, they can access their data from anywhere. He claims TwinStrata is enabling a cloud SAN, with multi-tenancy and multi-site scalability along with local-speed performance, data reduction, high availability, encryption, centralized disaster recovery and capacity management.

With CloudArray 3.0, TwinStrata has also automated its caching capability. TwinStrata’s appliance uses storage on a JBOD, host RAID controller or array as cache. Previously, the cache capacity had to be configured manually.
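
TwinStrata hasn’t said how the automation works, but the general idea is easy to sketch: discover the local volumes available for caching and derive a budget from free space, rather than asking an administrator for a figure. Everything below (the names, the 80% policy) is assumed for illustration:

import shutil

CACHE_FRACTION = 0.8  # hypothetical policy: leave 20% headroom on each cache device

def auto_size_cache(mount_points):
    """Return a cache budget in bytes per local volume instead of a manually entered size."""
    budgets = {}
    for mnt in mount_points:
        usage = shutil.disk_usage(mnt)  # total/used/free for the volume
        budgets[mnt] = int(usage.free * CACHE_FRACTION)
    return budgets

# Example: auto_size_cache(["/srv/jbod0", "/srv/jbod1"]) -> bytes of cache per volume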

TwinStrata also added support for Nirvanix’s Cloud Storage Network, Rackspace and OpenStack private and public clouds.

Mike Kahn, managing director at The Clipper Group, said TwinStrata’s 3.0 release allows customers to “put a veil over existing storage so it can be used as one or more tiers of storage. And over time, they can move to a public or private cloud,” he said.


November 15, 2011  2:17 PM

Fusion-io braces for competition with bigger, faster PCIe flash device

Dave Raffo

Fusion-io has been one of the early successes in solid-state storage, turning its early lead in PCIe flash cards into an IPO this year after winning large deals with customers Facebook and Apple.

Now as competitors are popping up to challenge Fusion-io, the vendor is moving to make its products bigger and faster. Today Fusion-io doubled the capacity and improved performance of its ioDrive Octal flash-based accelerator card.

The 10 TB ioDrive Octal, aimed at data warehousing and supercomputing, includes eight 1.28 TB multi-layer cell (MLC) ioMemory modules in a double-wide PCIe device. Multiple cards can go into a server, packing 40 TB of flash capacity into a 4U server.

The 10 TB Octal can handle more than 1.3 million IOPS with 6.7 GBps of bandwidth, according to Fusion-io. The 5.12 TB ioDrive Octal that began shipping two years ago supported 800,000 IOPS.

Fusion-io founder and chief marketing officer Rick White said feedback received from customers indicates they want as much capacity as possible in the Octal product. He said the Octal is used mainly for scale-up performance while Fusion-io’s single-card ioDrive and ioDrive Duo cards are for scale-out implementations.

White said Fusion-io’s early success comes from the vendor taking a fresh approach to driving performance and reducing latency in storage systems.

“We founded this company as a software company,” he said. “We couldn’t convince major flash memory companies to build a memory card based on flash, so we had to do it ourselves. We say it’s not about PCI Express, it’s about not thinking about this as a disk. Don’t think about it as storage, think about it as a memory tier.

“We decouple storage from capacity. The old way of scaling performance was to add spindles, then you stripe them, short stroke them, and add a layer of cache. The problem was, you had network latency. It wasn’t just about IOPS, it was how fast can I get a response? And does this play with my application? We were about decoupling performance and putting it into the server.”

He said the idea was to enable hundred-thousand-dollar JBODs to perform as well as million-dollar SANs. Much of Fusion-io’s early success comes from convincing large companies such as Facebook and Apple to adopt its approach.
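
White’s spindle argument is easy to put numbers on. Assuming a typical 15K RPM drive delivers on the order of 180 random IOPS (a ballpark industry figure, not Fusion-io’s), hitting even a modest 100,000 IOPS by adding spindles takes hundreds of drives:

DISK_RANDOM_IOPS = 180   # rough figure for a 15K RPM SAS/FC drive
TARGET_IOPS = 100_000    # modest next to the Octal's 1.3 million

spindles_needed = -(-TARGET_IOPS // DISK_RANDOM_IOPS)  # ceiling division
print(spindles_needed)   # -> 556 drives, before striping and short-stroking overhead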

Traditional storage vendors moved into flash with solid-state drives (SSDs) on their storage arrays, but White said “all those SSDs had to go through a RAID controller on the PCIe bus. It’s about getting rid of SAS, SATA and all the storage protocols.”

The competition is paying attention. LSI Corp., STEC, Violin Memory, Texas Memory Systems, Micron, OCZ and Virident now have PCIe flash cards similar to Fusion-io’s, and EMC is preparing to ship its server-based PCIe flash Project Lightning product.

“The industry has followed us to PCIe, they’re following us to caching software, and the next step is to lose the storage protocols and think of us as a new tier of memory,” White said.


November 14, 2011  7:51 PM

Symantec speeds up failover for Windows apps

Sonia Lelii

Symantec Corp. today launched Veritas Storage Foundation High Availability 6.0 for Windows, designed for faster failover of Windows server applications and recovery of Windows Server Hyper-V virtual machines for disaster recovery.

Veritas SFHA 6.0 consists of Veritas Storage Foundation 6.0 for Windows and Veritas Cluster Server 6.0 for Windows. Symantec claims Veritas SFHA 6.0 cuts failover time for business-critical server applications from about 30 minutes to one minute because the software now uses an Intelligent Monitoring Framework with asynchronous monitoring instead of the traditional framework’s polling-based monitoring. Polling typically accounts for most of the failover window because an alert is raised before any formal action is taken. With asynchronous monitoring, a failure is detected almost instantly and action is taken sooner.
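
The underlying tradeoff is the classic polling-versus-event-driven one, independent of Symantec’s internals (which are not public). In miniature, with hypothetical callbacks:

import threading
import time

def polling_monitor(is_healthy, on_failure, interval_secs=60.0):
    # Detection latency is bounded by the poll interval: a failure just after
    # a poll can go unnoticed for a full interval.
    while is_healthy():
        time.sleep(interval_secs)
    on_failure()

def async_monitor(failure_event: threading.Event, on_failure):
    # The monitored resource signals the event, so detection is near-immediate
    # and no cycles are spent polling a healthy system.
    failure_event.wait()
    on_failure()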

“The biggest chunk of time was the polling process,” said Jennifer Ellard, a senior manager in Symantec’s Storage and High Availability Group. “We changed the paradigm from polling-based to asynchronous monitoring, which allows us to take action faster. We plug into Windows so we can get the information instantaneously.”

Ellard said failover is up to 30 times faster because many of the storage reconnection steps have been automated. Previously, the storage disk group was only available to the target server and not the standby servers. When a failover occurred, all the data in the disk group had to be imported into the new nodes and the disks had to be re-scanned. The Veritas SFHA 6.0 for Windows software now allows multi-node access to a shared volume, eliminating the need to import disk groups to the new systems and re-scan disks.

Veritas SFHA 6.0 for Windows now supports Hyper-V virtual machines with a feature that automates the end-to-end disaster recovery process. The 6.0 version helps coordinate the process for virtual machines with Windows Server 2008 R2 Failover Clustering to automate the detection of site-wide failures and recovery steps such as storage replication control, VM site localization and DNS updates.

Ellard said 6.0 allows for disaster recovery across any distance. “Typically, you can fail over up to 100 kilometers in virtual environments,” Ellard said. “We enable customers to fail over across any distance. We do this with asynchronous replication in Hyper-V environments.”

Veritas Storage Foundation 6.0 for Windows also allows live migrations, such as when administrators need to move VMs from server to server along with the associated storage. SF 6.0 supports Windows Server Hyper-V live migration, which does the actual movement of the VMs while the Symantec software handles the storage that is associated with the VMs. Ellard describes this as a “big customer request.”

Veritas SFHA 6.0 for Windows also includes Virtual Business Services (VBS) for recovery of applications across multiple business units, operating systems and storage platforms. VBS is designed to give administrators and business groups a coordinated method for automated high availability and recovery of multiple, interdependent applications, plus all the supporting physical and virtual technologies.

Veritas SFHA 6.0 will be generally available on Dec. 5.


November 11, 2011  3:36 PM

DataDirect Networks re-architects HPC storage

Dave Raffo

DataDirect Networks has added performance to its top-end high-performance computing (HPC) platform.

DDN this week launched its SFA12K series, which will replace the SFA10K product that the vendor has had success selling to HPC shops.

DDN CEO Alex Bouzari said the biggest improvements over the SFA10K are the internal network inside the appliance, the storage processing that lets customers embed file systems or applications inside the appliance, and greater density.

The SFA12K has 64 GB of memory, and DDN claims it scales to 1 TBps with 25 arrays using InfiniBand or Fibre Channel connectivity. It also runs up to 16 virtual machines inside an array. The SFA12K holds up to 84 2.5-inch or 3.5-inch SAS or SATA disks in one array – up from 60 drives in the SFA10K – and 840 disks in a rack. The SFA12K supports up to 600 TB of eMLC solid-state drives (SSDs).

The SFA12K platform consists of three products. The SFA12K-40 is the highest performing model, hitting 40 GBps of bandwidth and 1.4 million flash IOPS. The SFA12K-20 handles 20 GBps and 700,000 flash IOPS, according to DDN. The SFA12K-20E is available with DDN’s ExaScaler or GridScaler parallel file systems running on the SFA12K-20 array. Customers can also embed applications natively within the SFA12K-20E.
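
The 1 TBps scaling claim squares with the per-model figure: 25 of the top-end arrays at 40 GBps apiece is an even terabyte per second of aggregate bandwidth.

per_array_gbps = 40             # SFA12K-40 bandwidth
arrays = 25
print(per_array_gbps * arrays)  # -> 1000 GBps, i.e. the claimed 1 TBps aggregate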

The SFA10K could deliver 800,000 flash IOPS and 15 GBps of bandwidth. Bouzari said a new architecture was needed to keep up with larger data sets, cloud computing requirements, and data center power and footprint constraints.

“In HPC, people are asking for levels of performance that just cannot be achieved by following the same old approaches,” he said. “Today you have large data centers being built and types of processing requirements deployed inside data centers that cannot be met with traditional architectures.”

Bouzari said IBM and Hewlett-Packard are among the DDN partners that will resell the new platform. The SFA12K won’t go GA until the second quarter of 2012, but DDN said it already has more than 50 PB of orders, including a 15 PB purchase by the Leibniz Supercomputing Centre (LRZ) in Munich. LRZ already uses DDN storage for its SuperMUC supercomputer. DDN said Argonne National Laboratory has also purchased SFA12K technology for its IBM BlueGene/Q-based Mira supercomputer.


November 10, 2011  1:39 PM

Virident ready to make flash move with MLC card, funding

Dave Raffo

PCIe flash startup Virident Systems released its first multi-level cell (MLC) solid state card today and hauled in $21 million in funding.

Virident, whose main competitor is Fusion-io, picked up an influx of cash from its previous investors Globespan Capital Partners, Sequoia Capital and Artiman Ventures. Strategic investors Intel Capital, Cisco and a storage company that did not want to be identified also pitched in.

The funding round brings Virident’s total to $57 million, and it will need all the help it can get to compete with Fusion-io and the other competitors coming into the market. Fusion-io raised $237 million when it went public earlier this year and is hoping to raise another $100 million in a secondary offering of shares. LSI and STEC also sell PCIe flash cards, and EMC’s Project Lightning server-based PCIe flash product is in beta.

Virident’s new FlashMax card comes in single-level cell (SLC) and MLC versions. Virident’s first product, tachIOn, was SLC only.

The FlashMax MLC card is available in 1 TB and 1.4 TB configurations. The vendor claims the MLC card has a 19-microsecond write latency and 62-microsecond read latency, write performance of 600 MBps and read performance of one million IOPS. The more expensive SLC version comes in 300 GB, 400 GB and 800 GB configurations, with 16-microsecond write latency and 47-microsecond read latency. It writes at 1.1 GBps and reads at 1.4 million IOPS.
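
Little’s Law gives a feel for what those spec-sheet numbers imply about concurrency, assuming the latency and IOPS figures describe the same workload (vendor data sheets don’t guarantee that): outstanding I/Os = IOPS × latency.

read_iops = 1_400_000      # SLC FlashMax claimed read rate
read_latency_s = 47e-6     # claimed 47-microsecond read latency

outstanding = read_iops * read_latency_s  # Little's Law: L = lambda * W
print(round(outstanding))  # -> ~66 I/Os must be in flight to sustain that rate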

Virident VP of marketing Shridar Subramanian said the MLC card costs less than half of the SLC card, with the MLC starting price at around $13,000.

Subramanian said Virident is working on OEM deals with larger server partners. He said Virident competes well against Fusion-io because of its superior IOPS performance, but admits his competitor is better known and was first to market.

“Many customers bought us because of capacity and performance,” he said. “Where [Fusion-io is] ahead of us is that they’ve been in the market longer, and people know more about Fusion-io than Virident.”

Like Fusion-io, Virident is counting on EMC’s Project Lightning product having limited appeal beyond high-end EMC customers. “Our customers are mostly in the direct attached storage market where EMC does not have a big presence,” he said. “EMC Project Lightning is cache, it’s an augmentation of EMC’s current infrastructure.”

