Storage Soup


September 16, 2011  1:15 PM

Storage Headlines for September 16, 2011

Megan Kellett

Check out our Storage Headlines podcast, where we review the top stories of the past week from SearchStorage.com and Storage Soup.

Here are the stories we covered in today’s podcast:

(0:22) Kaminario, Anobit add multi-level cell flash options

(0:49) Data storage startups: 10 emerging vendors worth watching

(1:33) Quantum StorNext appliance line grows

(2:18) Are you ready for SSD sprawl?

(3:11) Hitachi snaps up BlueArc

(3:40) Storage Products of the Year 2011

September 13, 2011  2:34 PM

Changing definitions for storage systems

Randy Kerns

At an alumni event recently I spoke with several friends who are also engineers but in different disciplines (power systems, chemical engineering, and geology). They commented about how the price of storage had declined so significantly over time, and talked about storage they had just seen in a local retail outlet.

I explained that what they were referring to is called consumer storage, and that it is significantly different from the storage systems used in businesses. I went through the attributes expected of enterprise-class storage systems. The features offered by higher-end storage systems, such as snapshots and remote replication, took a while to explain. It was easier to explain concepts such as testing, support, and service contracts because I could relate those to equipment used in their industries.

I was not convincing, because they asked why not just take consumer storage, add software to the server it is attached to, provide all those functions that way, and run multiples of them in case one fails. The important point they were making was that by doing so, they could get consumer prices and either pay only for the added software or use freeware.

That led me to thinking that my friends (unintentionally, I believe) were actually describing some business storage systems that we’re seeing today. These products – examples include the Hewlett-Packard P4000 Virtual SAN Appliance (VSA), Nutanix Complete Cluster, and the VMware vSphere Storage Appliance – include a group of servers with disks running a storage application.

Some of these are more sophisticated than that simple description suggests, with the integration of multiple elements and differentiated capabilities in the storage application (not to mention varying maturity). But the concept is similar. These products bring new options and require new definitions to describe storage systems, whether as a variation on the storage appliance or under other, newer names.

This new definition of storage systems would include virtual machines that run a storage application to federate storage attached to physical servers. These systems definitely warrant consideration when evaluating solutions to storage demands. While the new options may make the evaluation more complicated, additional options typically lead to cost advantages. And that’s the point my friends were really making to me — more or less.
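
A rough sketch makes the idea concrete. The toy Python below is my own illustration with hypothetical names; it is not how the HP, Nutanix, or VMware products work internally. It simply mirrors each write across a group of server nodes so the pool survives a single failure:

    # Toy sketch of a "storage application" federating server-attached disks.
    # Hypothetical example only; not how the HP P4000 VSA, Nutanix, or VMware
    # storage appliance products work internally.

    class ServerNode:
        """A commodity server contributing its local disk to the pool."""
        def __init__(self, name):
            self.name = name
            self.blocks = {}   # block address -> data, standing in for a local disk
            self.alive = True

        def write(self, addr, data):
            if self.alive:
                self.blocks[addr] = data

        def read(self, addr):
            return self.blocks.get(addr) if self.alive else None

    class FederatedPool:
        """Mirrors every write to `copies` nodes so one failure loses no data."""
        def __init__(self, nodes, copies=2):
            self.nodes, self.copies = nodes, copies

        def write(self, addr, data):
            # Choose replica nodes deterministically from the block address.
            for i in range(self.copies):
                self.nodes[(addr + i) % len(self.nodes)].write(addr, data)

        def read(self, addr):
            for i in range(self.copies):
                data = self.nodes[(addr + i) % len(self.nodes)].read(addr)
                if data is not None:
                    return data
            raise IOError("all replicas unavailable")

    pool = FederatedPool([ServerNode("node%d" % i) for i in range(3)])
    pool.write(7, b"payroll")
    pool.nodes[1].alive = False   # one commodity server fails...
    print(pool.read(7))           # ...and the data is still readable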

 (Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 12, 2011  2:25 PM

Are you ready for SSD sprawl?

Megan Kellett

IBM is talking up the results of a recent survey Zogby International did on its behalf. Some results are not surprising: most of the IT pros surveyed are concerned about data growth and are looking for new solutions to their troubles. But one thing did jump out: the popularity of solid-state drives (SSDs). About two-thirds of those surveyed are using SSD technology or plan to. The holdouts are discouraged for now by high costs, according to the survey.

Now, says IBM, we could be looking at a new trend: “SSD sprawl.” That’s like server sprawl, but with a new twist.

According to Steve Wojtowecz, vice president of Tivoli Storage Software Development at IBM, users who keep tacking SSDs onto their legacy equipment risk giving SSDs more workloads than they can handle, creating the same sort of trouble that too many virtualized servers can.

“IT departments are worried about ‘SSD sprawl’,” said Wojtowecz. “This is similar to the server sprawl back at the start of the client-server days when departments would go off and buy their own servers and IT to support their own application or department. Over time, there were hundreds of servers purchased outside the IT procurement and management process, and, over time, the companies were left with hundreds of thousands [of dollars] worth of computer power being woefully under-utilized,” Wojtowecz explained.

“IT teams remember this,” he said, “and are trying very hard to prevent the same situation happening with SSDs.”


September 7, 2011  9:51 PM

Hitachi Data Systems snaps up BlueArc

Brein Matturro

By Sonia Lelii, Senior News Editor

Hitachi Data Systems earlier today announced it has scooped up its OEM partner BlueArc for $600 million, and hours after the news broke, not many seemed to be taken aback by the acquisition that gives HDS its own NAS platform.



September 2, 2011  12:04 PM

VMWorld notebook: Symantec prepares Storage Foundation 6 with primary dedupe

Dave Raffo

LAS VEGAS – Storage-related notes from this week’s VMWorld 2011:

Symantec has done a lot lately to try to catch up with smaller rivals on virtual machine support for its backup products. Now it is ready to support virtualization in its storage management products, as well as data deduplication for primary data.

Symantec is preparing to give Storage Foundation its first major release in five years. The main focus for the overdue upgrade will be support for mixed environments, which means virtual servers and the cloud as well as physical servers.

The Storage Foundation 6 launch will come in a month or two, but Symantec senior VP of storage management Anil Chakravarthy filled me in on a few details. He said the goal is to allow customers to run Storage Foundation in any operating system and on any hypervisor.

“We’re taking the existing products [in Storage Foundation suite] and orienting them to mixed environments,” he said. “Customers can mix and match the applications on a combination of platforms.”

Symantec is moving ApplicationHA into the Storage Foundation suite and that will also get an upgrade, along with Cluster File System and Veritas Operations Manager. ApplicationHA has been a standalone product until now.

Chakravarthy said Storage Foundation will enable primary dedupe at the file system level, and work with any NAS or SAN systems. He also claims Symantec will get more granularity than storage array vendors that have added or are adding primary dedupe.

“We’ve had it in our backup products,” Chakravarthy said. “Now we’ve taken the dedupe engine and built it into the file system for primary data. Putting it at the file system level gives us granularity that you cannot have from deduping at the array level.”
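
For a feel of what file system-level dedupe means, here is a minimal sketch of content-hash deduplication in Python. It is my illustration of the general technique, not Symantec's engine, which no doubt handles chunking, concurrency, and garbage collection far more carefully:

    import hashlib

    # Minimal sketch of fixed-chunk deduplication in a file store. Real primary
    # dedupe engines use smarter chunking plus reference counting, concurrency
    # control, and garbage collection; this only shows the core idea.

    CHUNK = 4096          # fixed chunk size in bytes
    store = {}            # chunk hash -> chunk data, each unique chunk stored once

    def dedupe_write(data):
        """Split data into chunks, store unseen chunks, return the file recipe."""
        recipe = []
        for off in range(0, len(data), CHUNK):
            chunk = data[off:off + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:      # only new content consumes space
                store[digest] = chunk
            recipe.append(digest)
        return recipe

    def dedupe_read(recipe):
        return b"".join(store[d] for d in recipe)

    file1 = dedupe_write(b"A" * 8192)                  # two identical chunks
    file2 = dedupe_write(b"A" * 4096 + b"B" * 4096)    # one duplicate, one new
    assert dedupe_read(file1) == b"A" * 8192
    print(len(store))     # 2 unique chunks stored for 16 KB of logical data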

One area that Symantec is staying out of for now is sub-LUN automated tiering. Storage Foundation already has what it calls Smart Tiering at the LUN level, but Chakravarthy said sub-LUN tiering is best handled at the array. …

One of the less publicized features in vSphere 5 is native support in ESX of Intel’s Open Fibre Channel over Ethernet (FCoE). Naturally, Intel claims this support is a big deal.

Intel announced Open FCoE last January, claiming it will do for FCoE what software initiators did for iSCSI. That is, it will enable FCoE on standard NICs without additional hardware adapters. VMware vSphere 5 customers can use a standard 10-Gigabit Ethernet (GbE) adapter for FCoE connectivity instead of a more costly Converged Network Adapter (CNA). Intel supports Open FCoE on its Ethernet Server Adapter X520 and 82599 10 Gb Ethernet Controller cards.

Intel’s approach to FCoE requires key partners to support its drivers. Windows and Linux operating systems support FCoE, but earlier versions of vSphere did not. Sunil Ahluwalia, senior product line manager for Intel’s LAN Access Division, said vSphere 5 customers running Intel’s supported adapters don’t have to add specialized CNAs to their networks. He said the concept is similar to Microsoft’s adding the iSCSI initiator to its stack in the early days of iSCSI, eliminating the need for TCP/IP offload engine (TOE) cards.

“We’ve seen that model be successful with iSCSI, and we’re taking the same steps now with FCoE,” he said. “Once you get it native in a kernel, it comes as a feature in the operating system and frees up the network card to be purely a network card.”

FCoE adoption has been slow, but Ahluwalia said he expects it to pick up after 10 GbE becomes dominant in networks. “Customers are looking at moving to 10-gig first,” he said. “As they roll out their next infrastructure to 10-gig and a unified network, FCoE and iSCSI will be additional benefits.” …

The primary data reduction landscape should start heating up soon. Besides Symantec adding primary dedupe to Storage Foundation, IBM and Dell are close to integrating dedupe and compression technologies they picked up through acquisitions last year.

A source from IBM said it will announce integration with Storwize compression on multiple IBM storage systems this fall, and Dell is planning to do the same with its Ocarina deduplication technology over the coming months.

In a recent interview with SearchStorage.com and Storage magazine editors, Dell storage VP Darren Thomas said Dell products using Ocarina’s dedupe technology will start showing up late this year with more coming in 2012.

“We’ve been integrating Ocarina,” he said. “It will start appearing in multiple places. You’ll see a [product] drop this year and more than likely a couple more next year.”

Sneak peeks of EMC’s Project Lightning server-side PCIe flash cache product showed up in several EMC-hosted VMWorld sessions. The product appeared in demos and tech previews, and EMC VP of VMware Strategic Alliance Chad Sakac said it will be in beta soon and is scheduled for general availability by the end of the year. EMC first discussed Project Lightning at EMC World in May but gave no shipping date.


September 1, 2011  9:12 PM

Imation removes ProStor, buys InfiniVault assets

Dave Raffo

RDX removable disk pioneer ProStor Systems officially dissolved this week when Imation picked up ProStor’s remaining intellectual property. That consists mostly of ProStor’s InfiniVault data management technology, which Imation plans to build a tiered storage strategy around.

Tandberg acquired the RDX business from ProStor in May, and Imation scooped up the InfiniVault Management System software this week. ProStor’s InfiniVault systems included the software with RDX removable disk drives and cartridges.

Imation licensed RDX technology from ProStor and now licenses it from Tandberg, so it will continue the InfiniVault line and look to expand it. At least that’s the plan now, according to Imation VP of marketing and product management Ian Williams.

Imation is also hiring about 15 ProStor engineering, sales, marketing and support employees.

Williams said he expects to have more to say about the InfiniVault roadmap in a few months, but he expects it to become a major piece of what he calls Imation’s secure and scalable storage line. Imation’s other storage products are tape/optical drives and audio-video home systems.

“We’re transferring from a media-centric company to a company that provides tiered storage across multiple media types,” Williams said. “This is a solid platform for us to build on.”

InfiniVault systems present data on RDX drives as a NAS file system. The software adds retention, WORM, encryption, deduplication, compression, indexing, and digital fingerprinting features. Williams said having these capabilities on removable drives makes for a valuable alternative to using the cloud for DR and quick restores.
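
Conceptually, the digital fingerprinting piece amounts to recording a cryptographic hash of each file at ingest and re-checking it later to prove the content is unchanged. Here is a minimal sketch of that idea in Python (my assumption about the general approach, not ProStor's documented implementation):

    import hashlib, pathlib

    # Conceptual sketch of digital fingerprinting for retention verification.
    # The hash recorded at ingest later proves the file is bit-for-bit unchanged.

    fingerprints = {}     # path -> SHA-256 digest recorded at ingest time

    def ingest(path):
        fingerprints[path] = hashlib.sha256(
            pathlib.Path(path).read_bytes()).hexdigest()

    def verify(path):
        """True only if the file matches what was originally ingested."""
        digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
        return digest == fingerprints[path]

    pathlib.Path("record.txt").write_bytes(b"retained business record")
    ingest("record.txt")
    print(verify("record.txt"))   # True: content unchanged since ingest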

“Portable storage is the next stop for customers who want tiered storage,” Williams said. “RDX is the most effective way of doing that outside of the cloud. If you have multi-terabytes of data, the cloud is a great way for incremental archives and offsite retrieval, but with bare metal restores you need something faster with more bandwidth. That’s where RDX fits in.”

Williams said the deal gives Imation tiering IP much sooner than if it tried to build it internally. He said Imation did not try to buy the RDX end of ProStor because it already has access to the technology as a licensee.

Now Imation’s challenge is to do what ProStor failed to do – turn InfiniVault into a successful business.

“It’s a matter of focus,” he said. “ProStor was an RDX company for years, and that takes focus and funding to do. It’s hard to be an RDX company and a tiered storage company.”


August 28, 2011  11:58 PM

VMWorld is a major storage event

Randy Kerns

More than 70 storage hardware and software vendors will be exhibiting at VMWorld. That number confirms that VMWorld has become one of the major storage events where storage vendors choose to show their products and meet with customers, press, and analysts. These events represent a huge investment for vendors, and preparing for one involves logistics and the orchestration of many dynamic elements:

• Demonstration booths – designing, building, and transporting the booths has much in common with preparing a NASCAR team for race day.

• Staffing – coordinating the right people, who can speak to products, support the systems, and meet with the press, analysts, and customers, is almost an exercise in queuing theory.

• Equipment – the latest systems to be shown (in pristine condition) need to be readied, shipped to the event, and set up. If demonstration labs are required, support systems and infrastructure must be there as well. Seemingly simple things such as sufficient power and the right types of power connectors can cause major problems without proper preparation.

• Briefing staff and executives – preparing for meetings with press, analysts, and customers requires scripting the messages and ensuring everyone is briefed and ready.

• Arranging meetings – analysts especially face high demands on their time, and coordinating meetings is like putting together an odd-sized puzzle. Lead time is crucial to ensuring the right executives are speaking with analysts.

From our perspective as an analyst firm, VMWorld represents a valuable opportunity to meet with vendors’ executives to understand their strategies and translate the vendor information into useful analysis for our IT customers. The importance of VMWorld can be measured by the number of requests for meetings that we receive – more than can be scheduled and certainly more than can be absorbed.

Like vendors, we will invest our time in attending and preparing for only a limited number of events. VMWorld has become one of them, underlining the important role storage plays in server virtualization, as I discussed in a previous blog.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


August 24, 2011  12:48 PM

VMware and the importance of storage

Randy Kerns

With VMWorld approaching, vendors have already made announcements regarding storage and VMware with many to follow after the show begins next week.

These announcements come mostly from storage vendors focused on the way storage is used in VMware environments. The volume of announcements highlights how critically important storage is in the world of server virtualization. The cost of storage can eclipse the capital savings from server virtualization if important issues are not addressed correctly during virtualization projects.

Many storage issues can arise if there is a lack of adequate planning for server virtualization. A Virtual Desktop Infrastructure (VDI) can dramatically exacerbate issues with storage. Evaluator Group articles have looked at some of the potential problems with storage and virtualization, including: 

• Wide striping across many physical disks spreads out I/Os for virtual machines to compensate for the reduced number of drives caused by the consolidation of physical servers into virtual machines.
• Use of solid state devices for tiered storage or for tiered caching of data to provide electronic speeds for accessing highly active data.
• Exploitation of storage system features such as writeable clones (snapshots) and remote replication.
• Advanced storage system features working with VMware enablements such as the VMware vStorage APIs for Array Integration (VAAI).
• Thin provisioning of volumes to minimize trapped capacity. Space reclamation is required to maintain “thinness.” (A quick sketch of the idea follows this list.)
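
To make the thin provisioning point concrete, here is a minimal Python sketch of the idea: physical blocks are allocated from a shared pool only when written, and reclamation returns freed blocks so the volume stays thin. It is illustrative only; real arrays do this in the block map and via mechanisms such as SCSI UNMAP:

    # Minimal sketch of thin provisioning: physical blocks come out of a shared
    # pool only when written, and reclamation puts freed blocks back so other
    # volumes can use them.

    class ThinVolume:
        def __init__(self, pool, virtual_blocks):
            self.pool = pool                       # shared free-block pool
            self.virtual_blocks = virtual_blocks   # what the host thinks it has
            self.map = {}                          # virtual block -> physical block

        def write(self, vblock):
            if vblock not in self.map:             # allocate lazily, on first write
                self.map[vblock] = self.pool.pop()
            # (data would be written to physical block self.map[vblock] here)

        def reclaim(self, vblock):
            """The host deleted data; return the physical block to the pool."""
            if vblock in self.map:
                self.pool.append(self.map.pop(vblock))

    pool = list(range(100))               # 100 physical blocks shared by all volumes
    vol = ThinVolume(pool, virtual_blocks=1000)   # host sees a 1000-block volume
    vol.write(0); vol.write(42)
    print(len(pool))                      # 98: only written blocks consume capacity
    vol.reclaim(42)
    print(len(pool))                      # 99: reclamation keeps the volume thin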

New enhancements to VMware include better integration of VMware and storage systems/software in the areas of business continuance/disaster recovery (BC/DR) and backups. These integrations represent opportunities to improve operations and efficiency, but in most cases they will change workflows for the virtualized environment.

While these are great advances, IT operations will still have non-virtualized (physical) servers. That means there will be operational differences in these areas. IT shops on average have virtualized less than half of their environments, indicating that a bifurcated workflow strategy will persist for some time.

An ongoing area of improvement between VMware and storage is in the area of administration. There is a fundamental change underway regarding who manages the storage. Tools provided for virtualization allow non-storage administrators to do more of the storage provisioning required when they create virtual machines.

Additional management integration will be announced at VMWorld and improvements will continue. The administration, like the integration of BC/DR and backup capabilities, will likely be different between virtualized servers and non-virtualized servers and will continue to be that way for some time.

Server virtualization has been a major shift in IT operations and has brought a critical focus on storage. The focus and the parade of improvements will continue for some time, as will the changes in how it all gets managed.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


August 22, 2011  2:35 PM

Dell adds 2.5-inch SAS, MLC SSDs to EqualLogic

Dave Raffo

Dell revamped its EqualLogic iSCSI SAN hardware today, adding new entry-level and midrange platforms with support for 2.5-inch SAS drives and multi-level cell (MLC) solid state drives (SSDs) for the first time. The new PS6100 and PS4100 lines support the 5.1 firmware Dell launched in June.

The PS6100 series comes in 2U and 4U configurations, and scales to 72 TB in one array and 1.2 PB in a group of 16 systems. The 2U systems hold 24 2.5-inch drives for a maximum capacity of 21.6 TB, and the 4U systems hold 24 3.5-inch drives for 72 TB.

The PS6100 series supports 2.5-inch and 3.5-inch performance and capacity SAS drives, as well as up to 24 400 GB SSDs in the 6100S model and seven 400 GB SSDs in the 6100XS model. Dell is using Pliant Technology (now part of SanDisk) MLC drives in the PS6100 family.

The PS4100 series boxes are all 2U models, holding either 24 2.5-inch drives for 21.6 TB or 12 3.5-inch drives for 36 TB. The PS4100 supports performance and capacity SAS drives, but not SSDs. The PS4100 only supports two systems in one group.
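
As a sanity check, the quoted maximums hang together if you infer the drive sizes from the totals (my inference, not figures from a Dell spec sheet):

    # Back-of-the-envelope check of the quoted capacity figures. The per-drive
    # sizes are inferred from the totals, not taken from a Dell spec sheet.
    print(24 * 0.9)         # 21.6 TB: 24 2.5-inch bays with ~900 GB drives
    print(24 * 3.0)         # 72.0 TB: 24 3.5-inch bays with ~3 TB drives
    print(12 * 3.0)         # 36.0 TB: PS4100 with 12 3.5-inch ~3 TB drives
    print(16 * 72 / 1000)   # ~1.15 PB for a 16-array PS6100 group (quoted as 1.2 PB)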

“The big difference in the two platforms is scalability,” said Travis Vigil, executive director of Dell Storage.

The PS6100 and PS4100 will eventually replace the PS6000 and PS4000, although customers can mix nodes from the new and old platforms in the same virtual storage pool.

The 5.1 firmware handles tiering and load balancing that can help manage SSDs by moving data based on access patterns, Vigil said. Although EqualLogic has been offering single-level cell (SLC) SSDs in the PS6000 line since 2009, Vigil said less than 10% of EqualLogic systems ship with SSDs. “We’re seeing that our customers don’t need a lot of SSDs, but SSDs give a nice performance boost for those who do need them,” he said.
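
For a feel of what access pattern-based tiering does, here is a toy heuristic in Python: count accesses per block and promote the hottest blocks to the small SSD tier. This is my illustration of the general idea, not the logic of Dell's 5.1 firmware:

    from collections import Counter

    # Toy heuristic for access-pattern tiering between a small SSD tier and a
    # large HDD tier. Illustrative only; not EqualLogic's actual algorithm.

    SSD_SLOTS = 2             # the fast tier is small and expensive
    access_counts = Counter() # block ID -> number of recent accesses
    ssd_tier = set()          # block IDs currently promoted to SSD

    def access(block):
        access_counts[block] += 1

    def rebalance():
        """Promote the most-accessed blocks to SSD; the rest stay on HDD."""
        global ssd_tier
        ssd_tier = {b for b, _ in access_counts.most_common(SSD_SLOTS)}

    for b in [1, 1, 1, 2, 2, 3]:  # blocks 1 and 2 are hot, block 3 is cold
        access(b)
    rebalance()
    print(ssd_tier)               # {1, 2}: hot data lands on the SSD tier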

Pricing starts at $9,499 for the PS4100 and $30,699 for the PS6100.

The EqualLogic launch comes as Dell continues its transition from EMC OEM partner to selling its own storage, built mostly around the acquisitions of EqualLogic and Compellent. EqualLogic sales actually dropped last quarter from the previous year, according to Dell’s earnings report; Dell execs blamed the decrease on a supply chain issue that has since been fixed and on customers waiting for the new platform. Dell maintains that EqualLogic is still the iSCSI SAN market leader, however.


August 19, 2011  3:57 PM

Political, economic uncertainty hits storage

Dave Raffo

The debt ceiling crisis and market uncertainty have impacted storage sales – particularly in the government and financial services markets – leaving storage executives wondering if the buying decline is temporary or will be long-lasting.

Two of the largest storage vendors — NetApp and Brocade — this week reported disappointing financial results for the quarter that ended July 29. Their executives used terms like “IT headwinds” and “macroeconomic factors” that suggest the problems were beyond their control and part of a larger financial picture.

NetApp’s revenue of $1.46 billion and forecast of $1.61 billion were both below analysts’ expectations, which had been raised by the vendor’s recent upbeat analyst day. At least NetApp’s revenue grew year over year — Brocade reported storage switch sales fell six percent from last year.

Unlike most storage vendors, NetApp’s and Brocade’s quarters ended in July instead of June, so they got hit by the chaos around the debt ceiling debate in Congress that led to a roller-coaster stock market.

“Headwinds in the IT market, federal spending and the overall global economy made for a challenging quarter for the company,” Brocade CEO Mike Klayko said on his company’s earnings call. “The storage business is not immune to macro IT factors. Fluctuation in demand levels is normal and to be expected, particularly in this period of heightened economic uncertainty.”

NetApp said sales were strong last quarter until falling off a cliff in July. Executives blamed the debt ceiling crisis and “macroeconomic uncertainty,” saying federal government agencies and financial services were hit particularly hard.

NetApp CEO Tom Georgens said six of its 23 largest accounts are financial services companies, and all six had booking declines from the same quarter last year. He said that led him to believe NetApp’s sales decline was caused by overall economic factors rather than gains by competitors.

“We exploded out of April, we closed last quarter exceptionally strong,” Georgens said. “May was very strong, so there was no evidence that we had drained the swamp. And June was strong, so we were rolling. We were ahead of our forecast, and we felt really, really good about where we were. What we didn’t expect is the U.S. side of the house weakened as the quarter wore on. And financial services … the fact that all six of them in our major accounts program was down is an indicator that something’s going on there that I don’t think is specific to NetApp.”

Georgens said he doesn’t think the downturn will last as long as the one that began in late 2008, but he’s not sure of that.

“I don’t feel like we’re on the trajectory that we were in a couple years back,” he said. “I may feel that way 90 days from now, but it doesn’t yet feel that way today. This government thing — I don’t know how much the political overhang is a factor here, and we’ll just see what happens. But right now, we’re just going to assume that the current environment is going to stay roughly at this level going forward, and we’ll see where it goes from there.”

Brocade executives said they expect storage – particularly Fibre Channel SANs – to rebound because demand remains strong. Klayko said Brocade’s annual customer survey this year found that 80% of its storage customers said they expected to grow or maintain their FC switch spending over the next three years.

The vendor is starting to push its 16 Gbps technology, claiming there is demand for more bandwidth for applications such as virtual desktop infrastructure (VDI) and analytics. Klayko said Hewlett-Packard, IBM, EMC, Hitachi Data Systems and Fujitsu Technology Systems are already selling Brocade’s 16-gig switches.

“The buying dynamic continues to be very strong for Fibre Channel,” Brocade CTO Dave Stevens said. “It continues to be the dominant technology in the data center for pooled storage environments.”

Of course, demand doesn’t always turn into implementation – as NetApp and Brocade discovered last quarter.

