Storage Soup


December 15, 2009  8:39 PM

Virtual Instruments offers SOS-4-SAN service

Beth Pariseau

Virtual Instruments, a spinoff from optical communications and test-and-measurement equipment maker Finisar, is packaging its VirtualWisdom and NetWisdom tools into an “emergency SAN troubleshooting service” called SOS-4-SAN.

Virtual Instruments created the NetWisdom software (along with VirtualWisdom, which accounts for virtual server traffic), which can be used standalone to report on FC SAN performance or combined with Finisar’s Traffic Analysis Point (TAP) network sniffer that sits between a switch and the FC SAN. With or without a TAP, the software copies FC SAN traffic, strips away the payload, and dumps header information into a database for analysis and diagnostics.
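What that capture pipeline amounts to in code is fairly compact. Here's a minimal Python sketch, assuming the standard 24-byte Fibre Channel frame header layout; the SQLite schema and the record_frame function are illustrative inventions, not Virtual Instruments' actual implementation:

```python
import sqlite3
import struct

# Illustrative sketch only: the schema and function names are invented,
# not Virtual Instruments' code. Field offsets follow the standard
# 24-byte Fibre Channel frame header.

db = sqlite3.connect("fc_headers.db")
db.execute("""CREATE TABLE IF NOT EXISTS frames (
    r_ctl INTEGER, d_id TEXT, s_id TEXT, fc_type INTEGER,
    seq_id INTEGER, seq_cnt INTEGER, ox_id INTEGER, rx_id INTEGER)""")

def record_frame(raw: bytes) -> None:
    """Keep the 24-byte FC header; discard the payload entirely."""
    hdr = raw[:24]
    r_ctl = hdr[0]                      # routing control
    d_id = hdr[1:4].hex()               # destination port ID
    s_id = hdr[5:8].hex()               # source port ID
    fc_type = hdr[8]                    # e.g. 0x08 for SCSI-FCP
    seq_id = hdr[12]
    seq_cnt, ox_id, rx_id = struct.unpack(">HHH", hdr[14:20])
    db.execute("INSERT INTO frames VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
               (r_ctl, d_id, s_id, fc_type, seq_id, seq_cnt, ox_id, rx_id))
```

An analyst can then run plain SQL over the stored headers, for instance counting frames per source/destination port pair to spot a congested link, without the tool ever retaining application payload data.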

VI has gotten these tools into some large shops, such as Unilever, the parent company of several household name brands, including Bertolli pasta, Lipton Tea, Slim-Fast diet drinks and Dove soap. Unilever’s SAN vendor Hewlett-Packard brought VI into that installation. After seeing it at work, Unilever’s staff installed NetWisdom permanently in its 5 PB UK data center.

It seems like a pretty good path to market for VI (similar to the one taken by Procedo, which performs compliant data archive migrations and is recommended to customers by major vendors in that market). But this week, Virtual Instruments went a step further and began offering its troubleshooting tools as a managed service, with its own website, www.sos4sans.com.

“Many Global 2000 customers are not satisfied with the SAN troubleshooting skills of their current SAN component vendors,” VI VP of marketing Len Rosenthal said. “These vendors really only have visibility into their components (HBAs, switches, cabling, arrays) and, unlike Virtual Instruments, they don’t have the ability to measure transaction traffic flows and error conditions across the entire SAN infrastructure.”

However, the partner business isn’t going away, Rosenthal added. “Some of the larger vendors are beginning to resell VI services as a part of their global service offerings,” he said. “We are planning to work with more of them in the near future to either resell our services or train them to offer SAN troubleshooting services themselves using the NetWisdom and VirtualWisdom products.”

According to a VI press release, customers put in a request via website form or phone, and “a qualified Virtual Instruments services professional will immediately respond at no charge to the customer.” The VI services pro runs through a checklist of common problems to try to determine the severity of the problem. If necessary, VI personnel will travel to the customer site and remain there until the issues are resolved. Fees include a daily rate plus expenses for the services engineers.

December 14, 2009  8:13 PM

Broadcom jumps into FCoE fray, without Emulex

Dave Raffo

Broadcom, which failed in its bid to buy Emulex for its Fibre Channel over Ethernet (FCoE) stack earlier this year, is planning to demonstrate FCoE functionality on its 10GbE NetXtreme II controller Tuesday at its analyst day.

NetXtreme II is a single-ASIC card that Broadcom says can provide hardware-based acceleration and offload technology for FCoE and iSCSI. NetXtreme II cards that support Layer 2 Ethernet and iSCSI are already shipping. Burhan Masood, Broadcom’s senior manager of product marketing for high-speed controllers, says the FCoE capability was developed in-house.

“Broadcom is not new to storage, and we intend to be a serious player in this space,” Masood said.

QLogic went to market with the first single-ASIC converged network adapter (CNA) supporting FCoE, and has design wins with IBM, Dell and NetApp for its FCoE card. Emulex picked up its first FCoE design win for its OneConnect card last week with Verari Systems, which is restructuring amid rumors it is going out of business.

Masood says Broadcom has a leg up on QLogic’s single-ASIC 8100 series CNA because Broadcom supports iSCSI on its card. QLogic’s 8100 series handles only 10-Gigabit Ethernet and FCoE.

Hewlett-Packard, IBM, and Dell sell Broadcom NetXtreme II cards, and Masood says the card is available for server vendors to sample with FCoE functionality.

So, if Broadcom has FCoE capability on its unified adapters already, why did it need Emulex?

“Emulex may have helped with time to market, but one could argue about that,” Masood said. “We tried to hedge our bets and had in-house development going and we were committed to that as well. This is a home-brewed solution.”


December 11, 2009  9:21 PM

HP and HDS say automated tiered storage is on the way in 2010

Beth Pariseau

It’s that time of year again, the time when we take stock of the past 12 months and look ahead to what’s coming in the New Year. This week, a topic on many minds was automated tiered storage in the wake of EMC’s shipment of version 1 of its long-awaited FAST (Fully Automated Storage Tiering) feature for Clariion, Symmetrix and Celerra disk arrays.

EMC competitors Hitachi Data Systems and Hewlett-Packard say they will have something similar, and they deny EMC has beaten them to the punch, because FAST won’t move data at the sub-LUN (block) level until the second half of 2010 at the earliest. IBM is working on similar data placement software.

HDS’s Hu Yoshida pointed out that HDS has had a Tiered Storage Manager offering available that moves volumes among tiers of storage. It doesn’t work at the sub-LUN level experts have said will be necessary to spur enterprise SSD adoption, but then again, Yoshida said, neither does FAST version 1.

“EMC has been doing this for some time,” Yoshida pointed out, citing the SymOptimizer software that was available for the DMX3 and DMX4, which one EMC customer said caused his array to lock up when it performed migrations (EMC says FAST does not lock the entire array, just the devices where data is being swapped).

Yoshida also said he wondered why EMC bothered with FAST 1. “It’s usually easier to start with the smallest unit, the chunk or page, than the LUN,” he said. “There’s also more benefit to customers with dynamic provisioning, and moving less data around means the performance overhead on the system’s internal bandwidth is lower.”

Yoshida said HDS is working on page-level automated tiered storage movement for next year.
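Yoshida's argument is easy to make concrete. Here's a toy Python sketch of page-level tiering under assumed parameters (the 42 MB page size and 1,000-page SSD budget are hypothetical, and this is in no way HDS's or EMC's code): count I/O per page over a sampling window, then promote only the hottest pages.

```python
from collections import Counter

# Toy illustration of page-level tiering, not any vendor's actual
# algorithm. Page size and SSD budget are assumptions for the example.

PAGE_MB = 42               # assumed page size
SSD_BUDGET_PAGES = 1000    # assumed SSD-tier capacity, in pages

heat = Counter()           # I/O count per (LUN, page) in the current window

def record_io(lun: str, offset_mb: int) -> None:
    """Attribute each I/O to the page it lands on."""
    heat[(lun, offset_mb // PAGE_MB)] += 1

def promotion_plan() -> list:
    """Promote only the hottest pages that fit in the SSD tier."""
    return [page for page, _ in heat.most_common(SSD_BUDGET_PAGES)]
```

Promoting 1,000 such pages moves about 42 GB; moving every volume that contains a hot spot, as LUN-level tools must, could mean relocating terabytes. That difference is the internal-bandwidth overhead Yoshida describes.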

Kyle Fitze, marketing director for the HP storage division, said HP will also have something similar to FAST next year. “We think that having policy-based automated migration is a key component for driving adoption of solid-state drives, which are still a very small single-digit percentage of the overall volume of disk drives shipped,” Fitze said. Like HDS, HP has offerings that can move data nondisruptively today, but not automatically according to policy.

Also, at the high end, HP’s and HDS’s disk arrays are one and the same under the covers — the HDS USP-V, which HP OEMs as the XP. It will be interesting to see if the automation each exec was talking about will come from one place or two…


December 11, 2009  9:41 AM

12-10-2009 Storage Headlines

Beth Pariseau

(0:22) EMC releases first version of FAST for automated tiered storage

(1:21) Seagate’s Pulsar enters solid-state drive market for servers 

(2:51) Symantec Veritas Storage Foundation V5.1 has solid-state drive auto-discovery, thin volume reclaim
Symantec offers subscription-priced Storage Foundation for Amazon cloud storage

(6:32) NetApp and Microsoft buddy up

(8:17) F5 Networks adds ARX2000 Nehalem-based midrange file virtualization switch


December 10, 2009  7:59 PM

Dell prepares for 10-gig iSCSI, FCoE

Dave Raffo

Dell is adding support for 10-Gigabit Ethernet and making other enhancements to its EqualLogic iSCSI SAN arrays as part of today’s launch of 10-GigE products throughout its networking and storage platforms.

Dell has turned its $1.4 billion 2008 acquisition of EqualLogic into the iSCSI market lead, with 34% share according to IDC’s Storage Tracker figures for the third quarter. Today it unveiled a larger capacity system, the EqualLogic PS6500X, and added 10-GigE controllers to its PS6010 and PS6510 arrays. Dell will also support 100 GB SSDs (it currently offers 50 GB SSDs) on EqualLogic PS6510S arrays.

The 6500X with 10,000 rpm SAS drives can scale to 16 nodes per group (up from 12 in previous systems) for 460 TB of capacity per group. The 16-drive PS6010 and 48-drive PS6510 systems support two 10-GigE ports per controller for a total of four 10-GigE ports per system. Current EqualLogic customers will be able to upgrade by adding a 10-GigE node to an existing group.

Along with 10-GigE iSCSI, Dell is adding Fibre Channel over Ethernet (FCoE) support through storage networking partners QLogic and Brocade. Dell will begin shipping the QLogic 8152 converged network adapter (CNA) for PowerEdge servers and the QLogic 8142 mezzanine-card CNA for PowerEdge blades this month, and Brocade DCX Fibre Channel, 8000 series FCoE and RX Ethernet switches in February.

Despite making a splash around its 10-GigE support, Dell senior manager Travis Vigil said he expects a slow migration from Gigabit Ethernet. He says early adopters will be in one of two groups: customers running heavy database workloads and those with many virtual servers.

“There’s a small vocal group of customers interested in 10-gig, and they tend to be customers running sequential workloads,” Vigil said. “Also, a lot of virtual environments need bandwidth for 10-gig servers. We think it will be a small but growing course of adoption over the next couple of years.”

Analysts say organizations want to know their vendor has 10-GigE available as an upgrade path, but won’t necessarily rush out to upgrade. iSCSI has grown substantially in recent years largely with GigE, although Hewlett-Packard reports 10-GigE adoption for its LeftHand Networks iSCSI SANs.

Forrester Research senior analyst Andrew Reichman said having 10-gigE capability paves the way for converged Fibre Channel and Ethernet networks. “Unified fabrics between the SAN and LAN will happen with 10-gig, not 1-gig Ethernet,” he said. “For iSCSI to be a viable contender in unified traffic, it has to support the jump to 10-gig. Because it runs at 1-gig now, it can be a gradual transition and less disruptive than FCoE.”

IDC research manager for storage systems Natalya Yezhkova says she doesn’t anticipate an immediate boost in adoption of 10-GigE but sees it implemented in high volumes by 2011.

“In two or three years, 10-gig will largely replace 1-gig,” she said. “I don’t expect iSCSI growth will accelerate with 10-gig, it will be more of an organic replacement [of GigE systems]. It’s no surprise Dell is doing this based on its focus on iSCSI the past two years.”


December 10, 2009  3:30 PM

IBM finds a slot for Fusion-io

Beth Pariseau

Weeks after rolling out solid-state drive (SSD) support in its SAN Volume Controller (SVC) based on SSDs from STEC, IBM has confirmed official support for Fusion-io — the partner it first worked with to preview SSDs in SVC.

However, rather than the SVC storage virtualization appliance, Fusion-io flash storage will go inside System x servers in a “server-deployed storage tier,” according to the IBM press release. IBM calls the device the IBM High IOPS SSD PCIe Adapter for System x. It’s available in 160 GB and 320 GB capacities.

When IBM disclosed its STEC partnership for SVC, it was a surprise, given the work it had already done with Fusion-io in a test bed it called Project Quicksilver. IBM officials said they went with STEC drives instead because Fusion-io’s devices would have been a separate unit attached to the SVC, while STEC’s SSDs fit inside the SVC and require less rack space and power.

IBM is aiming the High IOPS Adapter based on Fusion-io at use cases such as “data-heavy graphics and 3-D renderings from medical research.”


December 9, 2009  5:05 PM

NetApp and Microsoft buddy up

Beth Pariseau

Continuing a trend of alliances among large IT vendors this year, NetApp Inc. and Microsoft Corp. this week revealed a “formalization” of their strategic partnership, with a strong focus on integrating Microsoft’s Hyper-V server virtualization software with NetApp storage systems.

The two companies have integrated products in the past, including NetApp’s SnapManager software, which allows its storage arrays’ snapshots to be controlled from Microsoft applications in their native management console. But NetApp vice president of solutions and alliances Patrick Rogers says this is the first formal agreement between the two companies.

The new three-year agreement will see “top to bottom” integration points linking Microsoft applications and the Windows operating system with NetApp storage products. Rogers said NetApp’s ApplianceWatch Pro 2.0, which added discovery, health monitoring, and performance monitoring of NetApp storage systems to Microsoft System Center Operations Manager, “warranted a longer-term commitment” between the vendors.

The two companies will be “aligning roadmaps” around virtual infrastructure, application-based storage management, and cloud computing going forward, Rogers said. According to a joint press release announcing the partnership, joint Microsoft and NetApp products will also be on display at Microsoft Technology Centers and at industry events.

Rogers and Microsoft director of virtualization strategy David Greschler insisted the “formalization” of their partnership was not a response to the recently formalized alliance between Microsoft rival VMware, NetApp rival EMC, and Cisco. Those three also pledged to align roadmaps and product development going forward under an alliance called VCE.

“That’s about a closed system,” Greschler said of VCE. “Microsoft has always been about working with partners, but we’re not locked into one approach.”

Ostensibly, VCE isn’t either — EMC CEO Joe Tucci made much of the VCE vendors continuing to offer “a la carte” products as well as the “fixed menu” of vBlock stacks. “It’s completely coincidental,” said Rogers. “This strategic alliance agreement has been in process since last summer.” He added that the Microsoft/NetApp alliance “will focus on applications as well as virtualization.”

Whether competition with VCE is the intent of the alliance or not, Taneja Group analyst Jeff Boles predicted in an email to Storage Soup yesterday that that’s where the impact of this partnership will be felt.

“NetApp still remains pretty closely coupled to the VMware infrastructure, and I think if anything, even post-VCE, they’ll still be gaining ground there,” Boles wrote. “I think this is actually an incredibly important announcement from Microsoft’s perspective, because this is the one area that they are substantially weaker than VMware in. In my book, storage for Microsoft is a bigger deal even than memory oversubscription.

“Meanwhile, NetApp simplifies storage management underneath a virtual infrastructure, whether that is VMware or Hyper-V. In response to some of the storage challenges, VMware is still struggling in some areas (messing with a multi-extent VMFS volume and then figuring out where and how to protect data is less than lots of fun). I’ve seen users shift to NetApp as the storage layer to overcome some of those issues. If Microsoft gets a leap on VMware’s storage capabilities through partnership with NetApp (which at the end of the day the [EMC] Celerra guys will probably mimic), then they might be able to throw down with VMware in interesting new ways.”


December 8, 2009  4:02 PM

Clearpace becomes RainStor

Dave Raffo

A startup that deduplicates and archives structured data to the cloud has a new name and new home.

Clearpace Software today changed its name to RainStor and moved its headquarters from the U.K. to San Francisco. RainStor was already the name of the archiving service Clearpace provided around its NPArchive database archiving software. The vendor will keep its developers in the U.K., but is moving operations to San Francisco because its future depends upon building industry partnerships.

Today, the company also released RainStor 3.5, aimed at customers such as telcos that deal with billions of customer data records. RainStor software can be delivered as a virtual appliance, embedded in an application through partners or integrated into a cloud service.

RainStor CEO John Bantleman claims his company’s software can get 40-1 compression of data inside databases, and customers can retrieve data from archives without reinflating it. RainStor calls its technology “pattern deduplication,” which stores individual patterns inside a database once to reduce the amount of data in the archive. Customers can query the archived data.
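RainStor hasn't published its format, but the "store each pattern once" idea can be sketched with a simple value dictionary. In this hypothetical Python illustration (the PatternStore class is invented, not RainStor's algorithm), distinct field values are stored once, rows become tuples of integer references, and queries filter on the references without reinflating the records.

```python
# Hypothetical illustration of pattern deduplication for structured
# records; RainStor has not published its format, so everything here
# is invented for the example.

class PatternStore:
    def __init__(self):
        self.values = []    # each distinct field value stored exactly once
        self.ids = {}       # value -> integer reference
        self.rows = []      # rows kept as tuples of references

    def _ref(self, value):
        if value not in self.ids:
            self.ids[value] = len(self.values)
            self.values.append(value)
        return self.ids[value]

    def insert(self, row):
        self.rows.append(tuple(self._ref(v) for v in row))

    def query(self, column: int, value):
        """Filter on the integer reference; rows are never reinflated."""
        ref = self.ids.get(value)
        if ref is None:
            return []
        return [row for row in self.rows if row[column] == ref]
```

In telco-style call records, where the same subscriber IDs, tariffs and cell IDs recur millions of times, each repeat costs one small integer instead of the full value; that kind of repetition is what a 40-1 ratio depends on.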

“We’re not talking about offline and tape archiving,” Bantleman said. “We’re talking about giving customers the ability to access historical information at the same level of performance that they can when retrieving data from online storage.”

Analyst Simon Robinson, storage research director for The 451 Group, says RainStor appears best positioned as a software as a service (SaaS) play. RainStor’s SaaS Data Escrow service provides a third-party copy of data to ensure the data within SaaS applications is always available, and its Application Retirement service lets organizations keep historical data from legacy applications that are retired during migrations to new software applications.

“I don’t see anybody else doing what they’re doing on a technology level,” Robinson said. “Database archiving never really achieved its initial promise. They take it a step further with massive compression, and you’re able to reinstate data without having to reinflate it.”

RainStor has an OEM deal with data integration vendor Informatica, and NPArchive is integrated with EMC Centera. It claims more than 50 customers, including several telcos. Its ability to put together a partner channel will likely determine RainStor’s success, especially if its future is in the cloud.

“We believe the use of the cloud for archiving and archiving services will dominate the industry,” Bantleman said.


December 7, 2009  5:21 PM

IDC numbers show some third quarter storage recovery

Beth Pariseau

Recent market research shows the storage industry began a rebound in the third quarter after two quarters of steep declines.

The first indications of this came from publicly traded storage vendors’ earnings reports for the quarter, most of which showed storage revenues still below last year’s levels while increasing over the first and second quarters of 2009.

According to IDC’s Quarterly Disk Tracker:

Worldwide external disk storage systems factory revenues posted a 10.0% decline year over year, totaling $4.4 billion in the third quarter of 2009 (3Q09), according to the IDC Worldwide Quarterly Disk Storage Systems Tracker. For the quarter, the total disk storage systems market declined to $6.0 billion in revenues, a 9.6% decline from the prior year’s third quarter.

Steve Scully, research manager for Enterprise Storage at IDC, says these numbers represent an increase over last quarter, when external revenue was $4.121 billion and overall storage revenue totaled $5.665 billion.
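A quick back-of-the-envelope check (our arithmetic, not IDC's) shows what those figures imply sequentially:

```python
# Quarter-over-quarter growth implied by the revenue figures quoted
# above, in billions of dollars.
external_q2, external_q3 = 4.121, 4.4
total_q2, total_q3 = 5.665, 6.0

print(f"External disk: {100 * (external_q3 / external_q2 - 1):.1f}%")  # ~6.8%
print(f"Total disk:    {100 * (total_q3 / total_q2 - 1):.1f}%")        # ~5.9%
```

In other words, third-quarter revenue ran roughly 6% to 7% ahead of the second quarter even while still trailing the year-ago period.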

IDC’s storage software report shows revenue of $2.87 billion in the third quarter, a 7.9% decline from last year but a 1.2% increase from the second quarter.

Gartner’s latest results showed a similar pattern. According to Gartner, the $3.9 billion in disk storage revenue last quarter represented a 7.3% decline from last year, following double-digit declines in the first two quarters of 2009.

This pattern — a sharp decline in the first quarter, stabilization in the second and third quarters and a slight increase for the fourth quarter — was predicted early in the year by some storage vendors and analysts. If this curve continues to play out, storage sales should see further recovery heading into 2010, though many industry watchers say the explosive growth period in systems between 2004 and 2007 will not be matched in the near future.


December 4, 2009  8:58 AM

12-03-2009 Storage Headlines

Beth Pariseau

Welcome back from the Thanksgiving break. Here are the stories you may have missed:

(0:25) EMC Centera Virtual Archive enables larger Centera clusters

(1:45) Brocade CEO Mike Klayko: We’re not for sale

(3:03) Scale Computing develops IBM’s GPFS for midmarket scale-out multiprotocol storage

(6:11) Nexenta Systems pushes NexentaStor forward with open storage and ZFS

(8:23) IBM chief engineer Barrera talks EMC-Cisco, XIV and solid-state drives

