Storage Soup

September 29, 2011  1:35 PM

QLogic takes another whack at converged storage networks

Dave Raffo

QLogic is taking the stance that having multiple personalities is the sane way to approach converged storage networking. With Fibre Channel (FC) remaining the dominant protocol and Ethernet becoming a better candidate for SANs, QLogic has new gear that supports the latest flavors of both.

The storage networking vendor updated its product platform to 16 Gbps Fibre Channel this week, including a switch that supports FC and 10 Gigabit Ethernet (10GbE) ports to give it what QLogic calls “dual personalities.” QLogic also launched its 8300 Series Converged Network Adapter (CNA) that supports Ethernet, Fibre Channel over Ethernet (FCoE) and iSCSI, and the 2600 Series 16 Gbps FC HBA.

The Universal Access Point 5900 (UA5900) can be configured to run 16 Gbps Fibre Channel or 10 GbE traffic. Customers can start with 24 device ports and grow to 68 ports by adding licenses. Four of the ports can be used as 64 Gbps Fibre Channel trunking ports, and the switches can stack to 300 device ports. The UA5900 can be a Fibre Channel or Ethernet edge switch, and – with a Converged Networking license – it can serve as a top-of-rack FCoE switch to compete with Brocade’s 8000 and Cisco’s 5548UP devices.

QLogic also said it would bring out an intelligent storage router – called the iSR6200 – with support for Fibre Channel, FCoE and iSCSI. The router is designed for SAN-over-WAN connectivity.

The UA5900 and adapters are expected to ship through QLogic’s OEM and channel partners in early 2012, with the iSR6200 expected late next year.

QLogic was one of Cisco’s early allies in delivering FCoE gear years ago, and is on its third generation of converged networking devices. But FCoE has gained little adoption and Fibre Channel isn’t going away. QLogic execs say they expect Fibre Channel to remain strong while FCoE is a longer-term item for many organizations. “We expect over the longer period, FCoE will gain momentum,” QLogic director of product marketing Craig Alesso said. “But Fibre Channel is still the workhorse for most enterprises.”

When FCoE does gain momentum, what role will hardware adapters play? Intel has launched software FCoE initiators that use host processing power and work with any network adapters. Intel’s plan is to eliminate the need for CNAs, but Alesso said QLogic’s adapters will have a big role in running FCoE. He maintains that CNAs are better suited for I/O processing and server CPUs should be used for applications.

“People can run FCoE initiators, but there’s a [performance] cost,” he said. “We free up servers to do what customers want to do with servers – run multiple virtual machines and multiple applications. The CPU should be used for running applications, not the I/O. We should run the I/O. Also, with [software] initiators, you lose management. You don’t have the common look and feel among management utilities.”
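Alesso’s argument comes down to where the per-frame protocol work runs. As a rough, hypothetical illustration (this is not FCoE code; a CRC stands in for the per-byte protocol processing a CNA would offload), the following Python snippet times one simulated second of 10 GbE traffic processed entirely on the host CPU:

    import time
    import zlib

    # Stand-in for per-frame protocol work (checksums, encapsulation)
    # that a hardware CNA would otherwise offload from the host CPU.
    FRAME = bytes(2048)                       # one 2 KB frame of payload
    FRAMES = (10 * 10**9 // 8) // len(FRAME)  # ~1 second of 10 GbE traffic

    start = time.perf_counter()
    crc = 0
    for _ in range(FRAMES):
        crc = zlib.crc32(FRAME, crc)          # protocol work burning host cycles
    elapsed = time.perf_counter() - start

    print(f"{FRAMES} frames processed in {elapsed:.2f}s of host CPU time")

Every CPU second the loop consumes is a second not spent running virtual machines or applications, which is the tradeoff Alesso is describing.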

September 28, 2011  12:43 PM

Arkeia adds dedupe, SSDs to backup appliances

Dave Raffo

Arkeia Software CEO Bill Evans has watched Symantec roll out a steady stream of backup appliances over the last year, and he asks, “What took so long?”

Arkeia began delivering its backup software on appliances four years ago, and this week launched its third generation of appliances. They include the data deduplication that Arkeia added to its software a year ago, solid state drives (SSDs) to accelerate updates to the backup catalog, and up to 20 TB of internal disk on the largest model.

“Since 2007, we’ve been telling everybody that appliances would be big,” Evans said. “Symantec has validated the market for us.”

Evans said about 25% of Arkeia’s customers buy appliances. Because appliances take less time to set up and manage, he said, they are popular in remote offices and at organizations without much IT staff.

The new appliances are the R120 (1 TB usable), the R220 (2 TB, 4 TB or 6 TB), the R320 (8 TB or 16 TB) and the R620 (10 TB or 20 TB). The two smaller models include optional LTO-4 tape drives, while the two larger units support RAID 6 and 8 Gbps Fibre Channel for moving data off to external tape libraries. They all include Arkeia Network Backup 9 software and built-in support for VMware vSphere. Arkeia’s progressive dedupe for source and target data is included with the R320 and R620, and optional with the R220. Pricing ranges from $3,500 for the R120 to $47,000 for the R620 with 20 TB.

The R620 includes 256 GB SSDs, enough to manage the backup catalog. “We would never put backup sets on SSDs, that would be too expensive,” Evans said. “But it makes sense to use SSDs to manage our catalog, which is a database of our backups. The catalog is random, and updating the catalog could be a performance bottleneck.”

“If we were simply a cloud gateway and combined SSDs and disk in a single package, we wouldn’t know what incoming data should live on SSD and what should live on disk. It all looks the same. Because we wrote the [backup] application, we could say ‘this data lives on disk and this data lives on SSD.’”
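The distinction Evans draws, application-aware placement versus a gateway that sees undifferentiated blocks, is easy to sketch. Below is a minimal, hypothetical Python illustration (not Arkeia’s code; the mount points and catalog schema are assumptions): the application sends its small, random catalog updates to an SSD mount and streams the large, sequential backup sets to bulk disk.

    import shutil
    import sqlite3
    from pathlib import Path

    # Assumptions: an SSD mounted at /mnt/ssd holds the random-access
    # catalog; bulk RAID disk at /mnt/disk holds sequential backup sets.
    SSD_ROOT = Path("/mnt/ssd")
    DISK_ROOT = Path("/mnt/disk")

    # The catalog is a database of backups: small, random updates,
    # so it lives on the SSD.
    catalog = sqlite3.connect(str(SSD_ROOT / "catalog.db"))
    catalog.execute(
        "CREATE TABLE IF NOT EXISTS backups (name TEXT PRIMARY KEY, stored_at TEXT)"
    )

    def back_up(source: Path) -> None:
        """Stream one backup set to bulk disk; record it in the SSD catalog."""
        dest = DISK_ROOT / (source.name + ".bak")
        shutil.copyfile(source, dest)   # large sequential write goes to disk
        catalog.execute(
            "INSERT OR REPLACE INTO backups VALUES (?, ?)",
            (source.name, str(dest)),
        )
        catalog.commit()                # small random update goes to SSD

Only the application can make this split, because only it knows which bytes are catalog and which are backup payload; a generic gateway, as Evans notes, sees data that all looks the same.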

For disaster recovery, the appliances can boot a failed machine by downloading software to it from a backup server. The appliances can also replicate data to cloud service providers.

September 27, 2011  3:50 PM

FalconStor founder Huai found dead

Dave Raffo

FalconStor founder ReiJane Huai, who stepped down as CEO last year after disclosing accusations of improper payments to a customer, was found dead from a gunshot Monday outside his Old Brookville, N.Y. home. Police have told New York newspapers his death was an apparent suicide.

Huai, 52, also served as CEO of Cheyenne Software before leading FalconStor for a decade. He resigned and was replaced as CEO by Jim McNiel when government agencies began investigating the vendor’s accounting practices.

According to newspaper accounts, Huai was found shot in the chest Monday morning. In a statement to Newsday, a FalconStor spokesman called Huai “a visionary and a leader” who was “admired and respected by a great many people.”

Huai came to the United States from his native Taiwan in 1984 to study computer science at New York’s Stony Brook University. He joined Cheyenne Software in 1985 as a manager of research and development for its ARCserve backup product, worked at AT&T Bell Labs from 1987 to 1988, and returned to Cheyenne as director of engineering in 1988. He became Cheyenne CEO in 1993 and sold the company to CA in 1996 for $1.2 billion. After a brief stint at CA, Huai founded FalconStor in late 2000, and held its CEO and chairman titles until last September.

Huai resigned from FalconStor last Sept. 29 after he disclosed that improper payments were allegedly made in connection with licensing of FalconStor software to a customer. The company began an internal investigation at the time, and so did the New York County District Attorney, the U.S. Attorney’s Office for the Eastern District of New York and the U.S. Securities and Exchange Commission (SEC). None of the investigations has released any findings.

FalconStor has received subpoenas from both agencies: the SEC subpoena sought documents relating to the vendor’s dealings with the customer in question, while the U.S. Attorney’s Office grand jury subpoena sought documents relating to some FalconStor employees and other company information.

FalconStor has said in public statements and SEC filings that it is cooperating with both investigations.

Two class action lawsuits were also filed against FalconStor last year, alleging the company made false statements because it failed to disclose weak demand for its products and that it made improper payments to a customer. Huai was named in those suits along with FalconStor CFO James Weber and board member Wayne Lam.

September 20, 2011  12:51 PM

DataDirect Networks discusses new system, IBM relationship

Dave Raffo

DataDirect Networks (DDN) today launched a new member of its Storage Fusion Architecture (SFA) family of high-performance computing (HPC) arrays, and quickly pointed out a large customer deal involving the new system and IBM’s General Parallel File System (GPFS).

DDN claims the SFA10000-X can handle mixed workload read-write speeds of 15 GBps with solid-state drives (SSDs). It holds up to 600 drives for a maximum capacity of 1.8 PB in a rack. DDN aims the system at Big Data (analytics and large numbers of objects), media and content-intensive applications. It will replace the S2A9900. DDN already has an SFA10000-E system aimed at highly virtualized environments.

DDN said Italian research center Cineca in June acquired an SFA10000-X from IBM. DDN Marketing VP Jeff Denworth offered the deal as proof that the relationship between DDN and IBM remains solid. IBM recently issued an end-of-life notice to customers for its DCS9900 – based on DDN’s S2A9900 – and suggested the DCS3700 that IBM sells from DDN competitor NetApp Engenio as a replacement.

The Engenio platform has competed with DDN for years, and is now in the hands of NetApp – another IBM partner. Denworth said IBM and DDN still have OEM deals for two other systems – including the S2A 6620 that IBM sells as a backend to its SONAS – and said IBM may have plans for the SFA10000-X.

“IBM discontinued one system among the portfolio we sell through them, and that system is four-year-old technology,” he said.

So why didn’t IBM replace the S2A9900 with the SFA10000-X? “All I can say is the SFA10000-X has a certain customer profile,” Denworth said. “I can’t make any statements about IBM’s intentions for that product.”

DDN executives call DDN the world’s largest privately held storage vendor, and claim they are doing well enough that the loss of any single partner wouldn’t break the company. DDN claims 83% revenue growth from 2007 through 2010 and is on pace for more than $200 million in revenue this year.

Yet despite a flurry of storage system vendor acquisitions last year and others looking to go public, DDN remains independent and private. DDN EVP of strategy and technology Jean-Luc Chatelain said an IPO will only happen if the terms are enticing enough.

“We’re privately held, and we like it that way,” he said. “An IPO is not an end for us, it’s a means. If we can use an IPO as a tool for additional currency for growth, we’ll look at that.”

DDN is growing its executive team. Chatelain joined from Hewlett-Packard in February. This month DDN hired former HP executive Erwan Menard as COO, Adaptec veteran Christopher O’Meara as CFO, and Quantum veteran William Cox as VP of worldwide channel sales.

On the technology front, DDN is using enterprise multi-level cell (eMLC) SSDs for the first time with the SFA10000-X. It is also embracing the Big Data label that storage vendors have been throwing around since EMC acquired scale-out NAS vendor Isilon late last year.

“DDN has been doing Big Data since 1998, everybody else is just catching up,” Chatelain said. “I don’t like the term, but everybody’s using it now. Our customers do Big Data for a living.”

September 19, 2011  12:25 PM

Vendors should focus on customers, not competitors

Randy Kerns

Competitive pressures often cause companies to lose focus when adopting product marketing strategies. These pressures come from executives and boards, and can be intense. They can also cause a vendor to pay attention to the wrong things, instead of putting the attention on the customer.

Vendor strategies must start with a few basics: What is the best way to position a product, and what product characteristics are necessary to meet future needs? Positioning a product is foremost about fitting customer needs. Describing how it fits those needs can be done in many ways, and typically there are multiple approaches taken in addition to data sheets and product specifications. These include:

• A short description of how the product can be used to meet the customer’s needs.
• A longer document that has details of usage in a specific environment.
• A white paper that explains the product in the context of the value it can bring.

Positioning statements usually include how a product fares against the competition. One sign of misguided focus is competitive material that leads with the negatives of a rival’s product. By starting with competitors’ negatives instead of laying out its own product’s advantages, a vendor risks wasting the limited time a customer will spend on the material. When we at Evaluator Group put together our Evaluation Guides for customers, starting with the negatives is a big red flag.

Delivering a product that meets future needs is another area where a company can get its focus skewed. Common focus miscues include:

• Lacking an intimate understanding of customer operational characteristics and their business processes.
• Misjudging how quickly customers will adopt new technology.
• Using general surveys to predict future customer needs.
• Watching what competitors are doing and trying to follow their lead.

These mistakes lead vendors to look in the rear-view mirror. Instead of looking out the windshield when making plans, they look back to see what has already happened.

Keeping the pressures in perspective and maintaining focus on how to position and deliver products can be tough for some companies. Those that do it well are more successful and, from our perspective, have a better handle on the competitive environment. Companies that have allowed their focus to shift make big mistakes and become less competitive.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

September 16, 2011  1:15 PM

Storage Headlines for September 16, 2011

Megan Kellett

Check out our Storage Headlines podcast, where we review the top stories of the past week from Storage Soup.

Here are the stories we covered in today’s podcast:

(0:22) Kaminario, Anobit add multi-level cell flash options

(0:49) Data storage startups: 10 emerging vendors worth watching

(1:33) Quantum StorNext appliance line grows

(2:18) Are you ready for SSD sprawl?

(3:11) Hitachi snaps up BlueArc

(3:40) Storage Products of the Year 2011

September 13, 2011  2:34 PM

Changing definitions for storage systems

Randy Kerns

At an alumni event recently, I spoke with several friends who are also engineers but in different disciplines (power systems, chemical engineering, and geology). They commented on how the price of storage had declined so significantly over time, and talked about storage they had just seen in a local retail outlet.

I explained that what they were referring to was called consumer storage, and how it was significantly different from storage systems used in businesses. I went through the different attributes expected from enterprise-class storage systems. The features offered by higher-end storage systems, such as snapshots and remote replication, took a while to explain. It was easier to explain concepts such as testing, support, and service contracts because I could relate those to equipment used in their industries.

I was not convincing, because they asked why not just take the consumer storage, add software to the server it was attached to in order to provide all those functions, and deploy multiples of them in case one fails. The important point they were making was that by doing so, they could get consumer prices and either pay only for the added software or use freeware.

That led me to thinking that my friends (unintentionally, I believe) were actually describing some business storage systems that we’re seeing today. These products – examples include the Hewlett-Packard P4000 Virtual SAN Appliance (VSA), Nutanix Complete Cluster, and the VMware vSphere Storage Appliance – include a group of servers with disks running a storage application.

Some of these are more sophisticated than that simple description suggests, given the integration of multiple elements and the differing capabilities (and maturity) of their storage applications. But the concept is similar. These products bring new options and require new definitions to describe storage systems, whether as a variation of a storage appliance or under other, more distinctive names.

This new definition of storage systems would include virtual machines that run a storage application to federate storage attached to physical servers. These systems are definitely worth considering when evaluating solutions to storage demands. While the new options may make the evaluation more complicated, additional options typically lead to cost advantages. And that’s the point my friends were really making to me — more or less.
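To make the concept concrete, here is a hedged Python sketch of the idea behind these products (a toy illustration with assumed paths, not how the HP, Nutanix or VMware products are actually implemented): a storage application pools directories on several servers’ local disks and mirrors each write onto two nodes, so the pool survives a single failure.

    import hashlib
    from pathlib import Path

    class FederatedStore:
        """Toy virtual storage appliance: pools server-local directories
        and mirrors every object onto two of them, so the pool survives
        the loss of any single node."""

        def __init__(self, node_dirs):
            self.nodes = [Path(d) for d in node_dirs]
            for n in self.nodes:
                n.mkdir(parents=True, exist_ok=True)

        def _placement(self, key: str):
            # Deterministic placement: hash the key, pick two distinct nodes.
            h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            first = h % len(self.nodes)
            return self.nodes[first], self.nodes[(first + 1) % len(self.nodes)]

        def put(self, key: str, data: bytes) -> None:
            for node in self._placement(key):
                (node / key).write_bytes(data)        # mirrored write

        def get(self, key: str) -> bytes:
            for node in self._placement(key):
                try:
                    return (node / key).read_bytes()  # fall back to the mirror
                except OSError:
                    continue
            raise KeyError(key)

    # Assumption: each path would be a disk local to a different physical server.
    store = FederatedStore(["/srv/node1", "/srv/node2", "/srv/node3"])

The real products add caching, thin provisioning, snapshots and proper failure detection on top of this basic pattern, but the pattern itself is what my friends were describing: commodity disks plus software.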

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

September 12, 2011  2:25 PM

Are you ready for SSD sprawl?

Megan Kellett

IBM is talking up the results of a recent survey Zogby International did on its behalf. There are some not-so-surprising results—most of the IT pros are concerned about data growth and are looking for new solutions to their troubles. But one thing did jump out: the popularity of solid-state drives (SSDs). About two-thirds of those surveyed are using SSD technology or plan to. The holdouts are discouraged for now by high costs, according to the survey.

Now, says IBM, we could be looking at a new trend: “SSD sprawl.” That’s like server sprawl, but with a new twist.

According to Steve Wojtowecz, vice president of Tivoli Storage Software Development at IBM, users who continue to tack SSDs onto their legacy equipment risk having SSDs take on more workloads than they can handle – and creating the same sort of trouble that too many virtualized servers can.

“IT departments are worried about ‘SSD sprawl’,” said Wojtowecz. “This is similar to the server sprawl back at the start of the client-server days, when departments would go off and buy their own servers and IT to support their own application or department. Over time, there were hundreds of servers purchased outside the IT procurement and management process, and the companies were left with hundreds of thousands [of dollars] worth of computer power being woefully under-utilized.”

“IT teams remember this,” he said, “and are trying very hard to prevent the same situation happening with SSDs.”

September 7, 2011  9:51 PM

Hitachi Data Systems snaps up BlueArc

Brein Matturro

By Sonia Lelii, Senior News Editor

Hitachi Data Systems earlier today announced it has scooped up its OEM partner BlueArc for $600 million, and hours after the news broke, few seemed taken aback by the acquisition, which gives HDS its own NAS platform.


September 2, 2011  12:04 PM

VMWorld notebook: Symantec prepares Storage Foundation 6 with primary dedupe

Dave Raffo

LAS VEGAS – Storage-related notes from this week’s VMWorld 2011:

Symantec has done a lot lately to try to catch up with smaller rivals on virtual machine support for its backup products. Now it is ready to support virtualization in its storage management products, as well as data deduplication for primary data.

Symantec is preparing to give Storage Foundation its first major release in five years. The main focus of the overdue upgrade will be support for mixed environments, meaning virtual servers and the cloud as well as physical servers.

The Storage Foundation 6 launch will come in a month or two, but Symantec senior VP of storage management Anil Chakravarthy filled me in on a few details. He said the goal is to allow customers to run Storage Foundation in any operating system and on any hypervisor.

“We’re taking the existing products [in Storage Foundation suite] and orienting them to mixed environments,” he said. “Customers can mix and match the applications on a combination of platforms.”

Symantec is moving ApplicationHA, a standalone product until now, into the Storage Foundation suite; it will also get an upgrade, along with Cluster File System and Veritas Operations Manager.

Chakravarthy said Storage Foundation will enable primary dedupe at the file system level and work with any NAS or SAN system. He also claims Symantec will offer more granularity than storage array vendors that have added or are adding primary dedupe.

“We’ve had it in our backup products,” Chakravarthy said. “Now we’ve taken the dedupe engine and built it into the file system for primary data. Putting it at the file system level gives us granularity that you cannot have from deduping at the array level.”
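Symantec hasn’t published the engine’s internals, but file system dedupe generally follows a standard pattern: split files into chunks, hash each chunk, and store each unique chunk only once. Here is a minimal Python sketch of that pattern (the fixed-size chunks and SHA-256 hashing are assumptions; production engines typically use variable-size, content-defined chunking):

    import hashlib

    CHUNK_SIZE = 4096  # assumption: fixed-size chunks

    chunk_store = {}   # hash -> chunk bytes; one copy of each unique chunk

    def dedupe_file(path: str) -> list:
        """Reduce a file to a list of chunk hashes, storing each
        unique chunk only once."""
        recipe = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                chunk_store.setdefault(digest, chunk)  # duplicates are skipped
                recipe.append(digest)
        return recipe

    def restore_file(recipe: list, path: str) -> None:
        """Rebuild a file from its chunk-hash recipe."""
        with open(path, "wb") as f:
            for digest in recipe:
                f.write(chunk_store[digest])

Because the file system knows which chunk recipes belong to which files, it can scope deduplication to a file, directory or application data set; an array deduplicating raw blocks below the file system has no such context, which is the granularity argument Chakravarthy is making.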

One area that Symantec is staying out of for now is sub-LUN automated tiering. Storage Foundation already has what it calls Smart Tiering at the LUN level, but Chakravarthy said sub-LUN tiering is best handled at the array. …

One of the less publicized features in vSphere 5 is native support in ESX for Intel’s Open Fibre Channel over Ethernet (FCoE). Naturally, Intel claims this support is a big deal.

Intel announced Open FCoE last January, claiming it will do for FCoE what software initiators did for iSCSI. That is, it will enable FCoE on standard NICs without additional hardware adapters. VMware vSphere 5 customers can use a standard 10-Gigabit Ethernet (GbE) adapter for FCoE connectivity instead of a more costly Converged Network Adapter (CNA). Intel supports Open FCoE on its Ethernet Server Adapter X520 and 82599 10 Gb Ethernet Controller cards.

Intel’s approach to FCoE requires key partners to support its drivers. Windows and Linux operating systems support FCoE, but earlier versions of vSphere did not. Sunil Ahluwalia, senior product line manager for Intel’s LAN Access Division, said vSphere 5 customers running Intel’s supported adapters don’t have to add specialized CNAs to their networks. He said the concept is similar to Microsoft’s adding the iSCSI initiator to its stack in the early days of iSCSI, eliminating the need for TCP/IP offload engine (TOE) cards.

“We’ve seen that model be successful with iSCSI, and we’re taking the same steps now with FCoE,” he said. “Once you get it native in a kernel, it comes as a feature in the operating system and frees up the network card to be purely a network card.”

FCoE adoption has been slow, but Ahluwalia said he expects it to pick up after 10 GbE becomes dominant in networks. “Customers are looking at moving to 10-gig first,” he said. “As they roll out their next infrastructure to 10-gig and a unified network, FCoE and iSCSI will be additional benefits.” …

The primary data reduction landscape should start heating up soon. Besides Symantec adding primary dedupe to Storage Foundation, IBM and Dell are close to integrating dedupe and compression technologies they picked up through acquisitions last year.

A source from IBM said it will announce integration of Storwize compression into multiple IBM storage systems this fall, and Dell is planning to do the same with its Ocarina deduplication technology over the coming months.

In a recent interview with Storage magazine editors, Dell storage VP Darren Thomas said Dell products using Ocarina’s dedupe technology will start showing up late this year, with more coming in 2012.

“We’ve been integrating Ocarina,” he said. “It will start appearing in multiple places. You’ll see a [product] drop this year and more than likely a couple more next year.”

Sneak peeks of EMC’s Project Lightning server-side PCIe flash cache product showed up in several EMC-hosted VMWorld sessions. The product appeared in demos and tech previews, and EMC VP of VMware Strategic Alliance Chad Sakac said it will be in beta soon and is scheduled for general availability by the end of the year. EMC first discussed Project Lightning at EMC World in May but gave no shipping date.
