By Sonia Lelii, Senior News Editor
Hitachi Data Systems earlier today announced it has scooped up its OEM partner BlueArc for $600 million, and hours after the news broke, not many seemed to be taken aback by the acquisition that gives HDS its own NAS platform.
LAS VEGAS – Storage-related notes from this week’s VMWorld 2011:
Symantec has done a lot lately to try and catch up to smaller rivals on virtual machine support for its backup products. Now it is ready to support virtualization in its storage management products, as well as data deduplication for primary data.
Symantec is preparing to give Storage Foundation its first major release in five years. The main focus of the overdue upgrade is support for mixed environments, meaning virtual servers and the cloud as well as physical servers.
The Storage Foundation 6 launch will come in a month or two, but Symantec senior VP of storage management Anil Chakravarthy filled me in on a few details. He said the goal is to allow customers to run Storage Foundation in any operating system and on any hypervisor.
“We’re taking the existing products [in Storage Foundation suite] and orienting them to mixed environments,” he said. “Customers can mix and match the applications on a combination of platforms.”
Symantec is moving ApplicationHA into the Storage Foundation suite and that will also get an upgrade, along with Cluster File System and Veritas Operations Manager. ApplicationHA has been a standalone product until now.
Chakravarthy said Storage Foundation will enable primary dedupe at the file system level and work with any NAS or SAN system. He also claims Symantec will get more granularity than storage array vendors that have added or are adding primary dedupe.
“We’ve had it in our backup products,” Chakravarthy said. “Now we’ve taken the dedupe engine and built it into the file system for primary data. Putting it at the file system level gives us granularity that you cannot have from deduping at the array level.”
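Chakravarthy's granularity point can be illustrated with a toy fixed-block deduplication pass (a hypothetical sketch, not Symantec's file system): hashing individual blocks lets identical data be stored once wherever it appears, something an array deduping opaque LUNs cannot do with file-level awareness.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size

class DedupeStore:
    """Toy content-addressed block store: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data
        self.files = {}    # filename -> ordered list of fingerprints

    def write_file(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # store only if new
            refs.append(fp)
        self.files[name] = refs

    def read_file(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupeStore()
payload = b"A" * BLOCK_SIZE * 4
store.write_file("vm1.vmdk", payload)
store.write_file("vm2.vmdk", payload)  # duplicate file: no new blocks stored
print(store.physical_bytes())          # 4096 -- four identical blocks collapse to one
```

Two logically identical virtual disk files consume the physical space of a single block; the file system sees the duplication because it owns the block map.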
One area that Symantec is staying out of for now is sub-LUN automated tiering. Storage Foundation already has what it calls Smart Tiering at the LUN level, but Chakravarthy said sub-LUN tiering is best handled at the array. …
One of the less publicized features in vSphere 5 is native support in ESX of Intel’s Open Fibre Channel over Ethernet (FCoE). Naturally, Intel claims this support is a big deal.
Intel announced Open FCoE last January, claiming it will do for FCoE what software initiators did for iSCSI. That is, it will enable FCoE on standard NICs without additional hardware adapters. VMware vSphere 5 customers can use a standard 10-Gigabit Ethernet (GbE) adapter for FCoE connectivity instead of a more costly Converged Network Adapter (CNA). Intel supports Open FCoE on its Ethernet Server Adapter X520 and 82599 10 Gb Ethernet Controller cards.
Intel’s approach to FCoE requires key partners to support its drivers. Windows and Linux operating systems support FCoE, but earlier versions of vSphere did not. Sunil Ahluwalia, senior product line manager for Intel’s LAN Access Division, said vSphere 5 customers running Intel’s supported adapters don’t have to add specialized CNAs to their networks. He said the concept is similar to Microsoft’s adding the iSCSI initiator to its stack in the early days of iSCSI, eliminating the need for TCP/IP offload engine (TOE) cards.
“We’ve seen that model be successful with iSCSI, and we’re taking the same steps now with FCoE,” he said. “Once you get it native in a kernel, it comes as a feature in the operating system and frees up the network card to be purely a network card.”
FCoE adoption has been slow, but Ahluwalia said he expects it to pick up after 10 GbE becomes dominant in networks. “Customers are looking at moving to 10-gig first,” he said. “As they roll out their next infrastructure to 10-gig and a unified network, FCoE and iSCSI will be additional benefits.” …
The primary data reduction landscape should start heating up soon. Besides Symantec adding primary dedupe to Storage Foundation, IBM and Dell are close to integrating dedupe and compression technologies they picked up through acquisitions last year.
A source from IBM said it will announce integration with Storwize compression on multiple IBM storage systems this fall, and Dell is planning to do the same with its Ocarina deduplication technology over the coming months.
In a recent interview with SearchStorage.com and Storage magazine editors, Dell storage VP Darren Thomas said Dell products using Ocarina’s dedupe technology will start showing up late this year with more coming in 2012.
“We’ve been integrating Ocarina,” he said. “It will start appearing in multiple places. You’ll see a [product] drop this year and more than likely a couple more next year.”
Sneak peeks of EMC’s Project Lightning server-side PCIe flash cache product showed up in several EMC-hosted VMWorld sessions. The product appeared in demos and tech previews, and EMC VP of VMware Strategic Alliance Chad Sakac said it will be in beta soon and is scheduled for general availability by the end of the year. EMC first discussed Project Lightning at EMC World in May but gave no shipping date.
RDX removable disk pioneer ProStor Systems officially dissolved this week when Imation picked up ProStor’s remaining intellectual property. That consists mostly of ProStor’s InfiniVault data management technology, which Imation plans to build a tiered storage strategy around.
Tandberg acquired the RDX business from ProStor in May, and Imation scooped up the InfiniVault Management System software this week. ProStor’s InfiniVault systems included the software with RDX removable disk drives and cartridges.
Imation licensed RDX technology from ProStor and now licenses it from Tandberg, so it will continue the InfiniVault line and look to expand it. At least that’s the plan now, according to Imation VP of marketing and product management Ian Williams.
Imation is also hiring about 15 ProStor engineering, sales, marketing and support employees.
Williams said he expects to have more to say about the InfiniVault roadmap in a few months, but he expects it to become a major piece of what he calls Imation’s secure and scalable storage line. Imation’s other storage products are tape/optical drives and audio-video home systems.
“We’re transferring from a media-centric company to a company that provides tiered storage across multiple media types,” Williams said. “This is a solid platform for us to build on.”
InfiniVault systems see data on RDX drives as a NAS file system. The software adds retention, WORM, encryption, deduplication, compression, indexing, and digital fingerprinting features. Williams said having these capabilities on removable drives makes for a valuable alternative to using the cloud for DR and quick restores.
“Portable storage is the next stop for customers who want tiered storage,” Williams said. “RDX is the most effective way of doing that outside of the cloud. If you have multi-terabytes of data, the cloud is a great way for incremental archives and offsite retrieval, but with bare metal restores you need something faster with more bandwidth. That’s where RDX fits in.”
Williams said the deal gives Imation tiering IP much sooner than if it tried to build it internally. He said Imation did not try to buy the RDX end of ProStor because it already has access to the technology as a licensee.
Now Imation’s challenge is to do what ProStor failed to do – turn InfiniVault into a successful business.
“It’s a matter of focus,” he said. “ProStor was an RDX company for years, and that takes focus and funding to do. It’s hard to be an RDX company and a tiered storage company.”
More than 70 storage hardware and software vendors will be exhibiting at VMWorld. That number confirms that VMWorld has become one of the major storage events where storage vendors choose to show their products and meet with customers, press, and analysts. These events represent a huge investment for vendors, and preparation includes the logistics and orchestration of many moving parts:
• Demonstration booths – the design, construction, and transportation of the booths have much in common with preparing a NASCAR team for race day.
• Staffing – coordinating the right people to speak to products, support the systems, and meet with the press, analysts, and customers is almost an exercise in queuing theory.
• Equipment – the latest systems to be shown (in pristine condition) need to be readied, shipped to the event, and set up. If demonstration labs are required, supporting systems and infrastructure must be there too. Seemingly simple things such as sufficient power and the right types of power connectors can cause major problems without proper preparation.
• Briefing staff and executives – preparing for meetings with press, analysts, and customers requires that scripted messages be prepared and that everyone is briefed and ready.
• Arranging meetings – analysts’ time is especially in demand, and coordinating meetings is like putting together an odd-shaped puzzle. Lead time is crucial to ensuring the right executives are speaking with analysts.
From our perspective as an analyst firm, VMWorld represents a valuable opportunity to meet with vendors’ executives to understand their strategies and translate the vendor information into useful analysis for our IT customers. The importance of VMWorld can be measured by the number of requests for meetings that we receive – more than can be scheduled and certainly more than can be absorbed.
Like vendors, we have a limited number of events we will invest our time in attending and preparing for. VMWorld has become one of them, underlining the important role played by storage in server virtualization, as I discussed in a previous blog.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
With VMWorld approaching, vendors have already made announcements regarding storage and VMware with many to follow after the show begins next week.
These announcements come mostly from storage vendors focused on the way storage is used in VMware environments. The volume of announcements highlights how critically important storage is in the world of server virtualization. The cost of storage can eclipse the capital savings from server virtualization if important issues are not correctly addressed during virtualization projects.
Many storage issues can arise if there is a lack of adequate planning for server virtualization. A Virtual Desktop Infrastructure (VDI) can dramatically exacerbate issues with storage. Evaluator Group articles have looked at some of the potential problems with storage and virtualization, including:
• Wide striping across many physical disks to spread out I/Os for virtual machines, compensating for the reduced number of drives that results from consolidating physical servers into virtual machines.
• Use of solid state devices for tiered storage or for tiered caching of data to provide electronic speeds for accessing highly active data.
• Exploitation of storage system features such as writeable clones (snapshots) and remote replication.
• Advanced storage system features working with VMware enablements such as the VMware vStorage APIs for Array Integration (VAAI).
• Thin provisioning of volumes to minimize trapped capacity. Space reclamation is required to maintain “thinness.”
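The thin-provisioning bullet above can be sketched in a few lines (a toy model, not any vendor's implementation): physical pages are allocated only on first write, and deleted data stays "trapped" until the host explicitly reclaims it, for example via a SCSI UNMAP-style command.

```python
class ThinVolume:
    """Toy thin-provisioned volume: physical pages allocated on first write."""
    def __init__(self, virtual_pages):
        self.virtual_pages = virtual_pages
        self.mapping = {}  # logical page -> physical page data

    def write(self, page, data):
        self.mapping[page] = data  # allocate on first write

    def delete(self, page):
        # A file-system delete alone does NOT free the physical page;
        # the capacity stays "trapped" until the host issues a reclaim.
        pass

    def reclaim(self, pages):
        """Host-driven reclamation (e.g. SCSI UNMAP) releases the pages."""
        for p in pages:
            self.mapping.pop(p, None)

    def allocated(self):
        return len(self.mapping)

vol = ThinVolume(virtual_pages=1000)
for p in range(10):
    vol.write(p, b"x")
vol.delete(3)           # delete without reclaim: still 10 pages allocated
print(vol.allocated())  # 10
vol.reclaim([3])
print(vol.allocated())  # 9 -- "thinness" maintained only after reclamation
```

The gap between the two printed values is exactly the trapped capacity the bullet refers to.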
New enhancements to VMware include better integration of VMware and storage systems/software in the areas of Disaster Recovery/Business Continuance (BC/DR) and backups. These integrations represent opportunities to improve operations and efficiency, but will change workflows in most cases for the virtualized environment.
While these are great advances, IT operations will still have non-virtualized (physical) servers. That means there will be operational differences in these areas. IT shops on average have virtualized less than half of their environments, indicating that a bifurcated workflow strategy will persist for some time.
An ongoing area of improvement between VMware and storage is in the area of administration. There is a fundamental change underway regarding who manages the storage. Tools provided for virtualization allow non-storage administrators to do more of the storage provisioning required when they create virtual machines.
Additional management integration will be announced at VMWorld and improvements will continue. The administration, like the integration of BC/DR and backup capabilities, will likely be different between virtualized servers and non-virtualized servers and will continue to be that way for some time.
Server virtualization has been a major shift in IT operations and has brought a critical focus on storage. The focus and the parade of improvements will continue for some time, as will the changes in how it all gets managed.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Dell revamped its EqualLogic iSCSI SAN hardware today, adding new entry-level and midrange platforms with support for 2.5-inch SAS drives and multi-level cell (MLC) solid state drives (SSDs) for the first time. The new PS6100 and PS4100 lines support the 5.1 firmware Dell launched in June.
The PS6100 series comes in 2U and 4U configurations, and scales to 72 TB in one array and 1.2 PB in a group of 16 systems. The 2U systems hold 24 2.5-inch drives for a maximum capacity of 21.6 TB, and the 4U systems hold 24 3.5-inch drives for 72 TB.
The PS6100 series supports 2.5-inch and 3.5-inch performance and capacity SAS drives, as well as up to 24 400 GB SSDs in the 6100S and seven 400 GB SSDs in the 6100XS models. Dell is using Pliant Technology (now part of SanDisk) MLC drives in the PS6100 family.
The PS4100 series boxes are all 2U models, holding either 24 2.5-inch drives for 21.6 TB or 12 3.5-inch drives for 36 TB. The PS4100 supports performance and capacity SAS drives, but not SSDs. The PS4100 only supports two systems in one group.
“The big difference in the two platforms is scalability,” said Travis Vigil, executive director of Dell Storage.
The PS6100 and PS4100 will eventually replace the PS6000 and PS4000, although customers can mix nodes from the new and old platforms in the same virtual storage pool.
The 5.1 firmware handles tiering and load balancing that can help manage SSDs by moving data based on access patterns, Vigil said. Although EqualLogic has been offering single-level cell (SLC) SSDs in the PS6000 line since 2009, Vigil said less than 10% of EqualLogic systems ship with SSDs. “We’re seeing that our customers don’t need a lot of SSDs, but SSDs gives a nice performance boost for those who do need them,” he said.
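The tiering behavior Vigil describes can be illustrated with a generic sketch (hypothetical, not EqualLogic's actual firmware logic): track per-block access counts and periodically promote the hottest blocks to the limited SSD tier.

```python
from collections import Counter

class TieredPool:
    """Toy two-tier pool: promote the most-accessed blocks to the SSD tier."""
    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity
        self.access_counts = Counter()
        self.ssd_tier = set()  # block IDs currently on SSD

    def record_read(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Move the hottest blocks onto SSD, implicitly evicting colder ones.
        hottest = [b for b, _ in self.access_counts.most_common(self.ssd_capacity)]
        self.ssd_tier = set(hottest)

pool = TieredPool(ssd_capacity=2)
for _ in range(100):
    pool.record_read("db-index")
for _ in range(50):
    pool.record_read("db-table")
pool.record_read("cold-archive")
pool.rebalance()
print(sorted(pool.ssd_tier))  # ['db-index', 'db-table']
```

The cold block stays on spinning disk; only the frequently read blocks earn flash, which is why a small SSD complement can still give "a nice performance boost."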
Pricing starts at $9,499 for the PS4100 and $30,699 for the PS6100.
The EqualLogic launch comes as Dell continues its transition from EMC OEM partner to selling its own storage, built mostly around the acquisitions of EqualLogic and Compellent. EqualLogic sales actually dropped last quarter from the previous year, according to Dell’s earnings report; Dell execs blamed the decrease on a since-fixed supply chain issue and on customers waiting for the new platform. Dell maintains that EqualLogic is still the iSCSI SAN market leader, however.
The debt ceiling crisis and market uncertainty have impacted storage sales – particularly in the government and financial services markets – leaving storage executives wondering if the buying decline is temporary or will be long-lasting.
Two of the largest storage vendors — NetApp and Brocade — this week reported disappointing financial results for the quarter that ended July 29. Their executives used terms like “IT headwinds” and “macroeconomic factors” that suggest the problems were beyond their control and part of a larger financial picture.
NetApp’s revenue of $1.46 billion and forecast of $1.61 billion were both below analysts’ expectations, which had been fueled by the vendor’s recent, optimistic analyst day. At least NetApp’s revenue grew year over year — Brocade reported storage switch sales fell six percent from last year.
Unlike most storage vendors, NetApp and Brocade’s quarter ended in July instead of June, so they got hit by the chaos around the debt ceiling debate in Congress that led to a roller-coaster stock market.
“Headwinds in the IT market, federal spending and the overall global economy made for a challenging quarter for the company,” Brocade CEO Mike Klayko said on his company’s earnings call. “The storage business is not immune to macro IT factors. Fluctuation in demand levels is normal and to be expected, particularly in this period of heightened economic uncertainty.”
NetApp said sales were strong last quarter until falling off a cliff in July. Executives blamed the debt ceiling crisis and “macroeconomic uncertainty,” saying federal government agencies and financial services were hit particularly hard.
NetApp CEO Tom Georgens said six of its 23 largest accounts are financial services companies, and all six had booking declines from the same quarter last year. He said that led him to believe NetApp’s sales decline was caused by overall economic factors rather than gains by competitors.
“We exploded out of April, we closed last quarter exceptionally strong,” Georgens said. “May was very strong, so there was no evidence that we had drained the swamp. And June was strong, so we were rolling. We were ahead of our forecast, and we felt really, really good about where we were. What we didn’t expect is the U.S. side of the house weakened as the quarter wore on. And financial services … the fact that all six of them in our major accounts program was down is an indicator that something’s going on there that I don’t think is specific to NetApp.”
Georgens said he doesn’t think the downturn will last as long as the one that began in late 2008, but he’s not sure of that.
“I don’t feel like we’re on the trajectory that we were in a couple years back,” he said. “I may feel that way 90 days from now, but it doesn’t yet feel that way today. This government thing — I don’t know how much the political overhang is a factor here, and we’ll just see what happens. But right now, we’re just going to assume that the current environment is going to stay roughly at this level going forward, and we’ll see where it goes from there.”
Brocade executives said they expect storage – particularly Fibre Channel SANs – to rebound because demand remains strong. Klayko said Brocade’s annual customer survey this year found that 80% of its storage customers said they expected to grow or maintain their FC switch spending over the next three years.
The vendor is starting to push its 16 Gbps technology, claiming there is demand for more bandwidth for applications such as virtual desktop infrastructure (VDI) and analytics. Klayko said Hewlett-Packard, IBM, EMC, Hitachi Data Systems and Fujitsu Technology Solutions are already selling Brocade’s 16-gig switches.
“The buying dynamic continues to be very strong for Fibre Channel,” Brocade CTO Dave Stevens said. “It continues to be the dominant technology in the data center for pooled storage environments.”
Of course, demand doesn’t always turn into implementation – as NetApp and Brocade discovered last quarter.
Starting in late August, storage vendors will be making more product announcements than usual. These launches represent updates, next generations, or completely new products. Some announcements are coordinated with major industry events to give the vendors a venue to speak about and demonstrate the products. The early fall dates get the messages out at a time when IT professionals have returned from vacations (assuming they even find time to take vacation) and the purchasing/budget cycle for most companies is approaching year-end.
After the initial introduction of a product, its further development follows a cadence built around development and test cycles to deliver new functions and incorporate fixes. The delivery cycle is generally six months for new functions or versions, with quarterly minor updates to address problems or issues discovered. Vendors need to take into account that increasing the frequency of releases increases the disruption for customers and internal support organizations.
There have been notable, misguided exceptions to this where hard lessons had to be learned by those who had not been through this before.
Major new product announcements are timed around the product’s readiness (or at least its hoped-for readiness) for market. Even large, mature companies sometimes blunder by announcing new products at times guaranteed to minimize attention. One major product last year launched right before Thanksgiving.
Holidays are a bad time to get the attention of IT professionals. And the end of the year is the worst time to bring out a new product, whether it is sold directly or through the channel by Value Added Resellers (VARs). Sales people have quotas to make, and that is done with a product that has been introduced and promoted to customers. It is not done with a brand new product that must be explained and takes time for reference accounts to build.
The announcement season is upon us, and these announcements represent the planning, marketing, and engineering that went into a product. They are major undertakings, and the execution of the announcement, including the timing, the venues, the supporting materials, and the product delivery, is critical to a company’s success. The announcement season isn’t just interesting; it’s important on a number of levels. How products are delivered and received could determine the continuation of a product line, affect a vendor’s reputation, and even make or break smaller companies.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Xiotech is changing its name to X-IO (pronounced X-I-O) as part of an overhaul that really started in 2008 when it launched its ISE architecture and accelerated with the appointment of Alan Atkinson as CEO in late 2009.
The name change becomes official next week, but Atkinson is briefing people in the industry about the move. With the new name, X-IO will sharpen its focus on solid state storage and gear up for a possible run at going public.
The vendor will continue to build on its ISE architecture, and is renaming its SSD-based ISE system Hyper ISE. The system first launched earlier this year as Hybrid ISE.
“XIO sounds like extreme IO, which is exactly where we want to be with our concentration on performance-driven storage,” Atkinson said.
Atkinson said Hyper ISE “is the same physical product” as Hybrid ISE, but its performance has been jacked up considerably due to improvements in the firmware and the algorithms used to tier data. X-IO claims one Hyper ISE system can deliver 200,000 IOPS in a 14.4 TB, 3U box.
Hyper ISE uses multi-level cell (MLC) SSDs and 10,000 RPM SAS drives in one enclosure. X-IO sees the system as a good fit for running databases, virtual servers and virtual desktop infrastructures (VDIs).
“We built Hyper ISE for performance-starved apps,” Atkinson said. “This is the next turn of the crank. We were talking about 60,000 IOPS before, but 200,000 IOPS is an awful lot better.”
Even with ISE as its centerpiece, X-IO is a different company than when Atkinson replaced Casey Powell as CEO. Senior management now includes industry veterans COO George Symons and chief strategy officer Jim McDonald. Like Atkinson, both have worked at EMC. The notable holdover is CTO Steve Sicola, who came to Xiotech when it acquired the Advanced Storage Architecture group from Seagate and turned its technology into the ISE platform.
The founders of Zerto are hoping to replicate the success they had with Kashya Networks, and they have $15 million in new funding to help fuel their plans.
Zerto was founded by the Kedem brothers, CEO Ziv and CTO Oded. They sold Kashya to EMC for $153 million in 2006. That turned into a good deal for EMC, which has had success with the RecoverPoint fabric-based replication product it got from Kashya. Zerto takes a different approach to replication. Instead of fabric or host-based replication for applications, Zerto Virtual Replication is a virtual appliance designed to work with VMware virtual machines.
On Monday, Zerto closed a Series B funding round led by U.S. Venture Partners, with earlier investors Battery Ventures and Greylock Partners participating. The round brings Zerto’s total funding to $21 million.
Ziv Kedem said the rise of server virtualization and the cloud have changed the face of replication for disaster recovery, prompting a shift in focus from physical devices to the hypervisor. Zerto positions Virtual Replication as a method of protection for VMs and applications for enterprise, or as the basis of DR as a service. It replicates specific VMs regardless of their LUNs, works with any storage array and features one-click recovery and WAN compression.
“The thing that’s changed from 2006 to today is the massive disruption of virtualization and the cloud,” Kedem said. “With a physical environment, storage was the center of the data center. Virtual machines have changed that and with the cloud, users just want to manage their applications. They don’t care where they are.”
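Hypervisor-level replication of the kind Zerto describes can be sketched generically (a toy model, not Zerto's implementation): tap each protected VM's write stream at the hypervisor, journal it, and replay the journal at the recovery site, with no dependence on which LUN backs the VM.

```python
class VMReplicator:
    """Toy hypervisor-level replicator: tap each VM's writes and journal
    them for a recovery site, independent of which LUN the VM lives on."""
    def __init__(self, protected_vms):
        self.protected = set(protected_vms)
        self.journal = []  # ordered (vm, offset, data) records

    def intercept_write(self, vm, offset, data):
        # Called in the write path for every VM; only protected VMs
        # have their writes journaled for replication.
        if vm in self.protected:
            self.journal.append((vm, offset, data))

    def replay(self, vm):
        """Rebuild a protected VM's disk image at the recovery site."""
        disk = {}
        for v, offset, data in self.journal:
            if v == vm:
                disk[offset] = data
        return disk

rep = VMReplicator(protected_vms={"erp-db"})
rep.intercept_write("erp-db", 0, b"hdr")
rep.intercept_write("test-vm", 0, b"junk")  # not protected: ignored
rep.intercept_write("erp-db", 512, b"rows")
print(rep.replay("erp-db"))  # {0: b'hdr', 512: b'rows'}
```

Because protection is declared per VM rather than per LUN, the test VM sharing the same storage generates no replication traffic, which is the contrast with fabric- or array-based approaches.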
Kedem said Zerto has about 20 customers in an extended beta program, including cloud providers offering DR as a service. The Virtual Replication product went GA this month. He said the startup will use the new funding to expand its sales and marketing. Kedem said he expects to grow the company from 30 employees today to about 50 by the end of the year.