By Sonia R. Lelii, Senior News Writer
EMC president Pat Gelsinger said EMC had already moved on by the time Dell officially ended their storage partnership last week after a 10-year relationship.
Gelsinger said it was no secret that EMC’s partnership with Dell had to drastically change or end after Dell expanded its storage presence by acquiring EMC competitors EqualLogic and Compellent.
“It got to a natural point where the relationship had to be restructured or it had to come to an end. Unfortunately, it came to an end,” Gelsinger told a group of reporters last Thursday at EMC Forum 2011, held at Gillette Stadium in Foxborough, Mass.
Dell sold EMC’s Clariion, Celerra, Data Domain and VNX systems through OEM and reseller deals, with the bulk of the revenue generated from Clariion midrange SAN sales. Dell will also no longer manufacture EMC’s low-end Clariion.
Dell-generated revenue for EMC has been sliding since last year, Gelsinger said. EMC reported $55 million in Dell-generated revenue in the fourth quarter of 2010, and that fell to under $40 million in the first quarter of this year. EMC has not given a Dell revenue figure since then, but its executives said non-Dell channel sales for its mid-tier products increased 44% year over year in the third quarter of this year.
EMC has built up its channel this year, making the SMB VNXe product a channel-only offering that directly competes with Dell products. Earlier this month, EMC launched a channel-only Data Domain DD160 SMB system.
EMC has also continued to upgrade its VNX midrange platform. Last week it launched an all-flash model (VNX5500-F) as well as a high-bandwidth VNX5500 option with four extra 6 Gbps SAS ports, and support for 3 TB SAS drives throughout the VNX family.
“Now that we are no longer continuing forward [with Dell], we have to do it ourselves,” Gelsinger said. “It’s a clear, simple focus on our part.”
Dell began selling EMC storage in 2001, and in late 2008 the vendors said they were extending their OEM agreement through 2013. Dell also widened the deal in March 2010 by adding EMC Celerra NAS and Data Domain deduplication backup appliances to their OEM arrangement. However, the relationship had already started to deteriorate by then, going back to when Dell acquired EqualLogic in early 2008.
The rift became irreparable last year when Dell followed an unsuccessful bid for 3PAR by completing an $820 million acquisition of Compellent in December.
By Sonia R. Lelii, Senior News Writer
Symantec Corp. today upgraded its FileStore N8300 clustered network-attached storage (NAS) appliance, adding deduplication for primary storage, metro-clustering and cascading replication, and cloning of VMware images.
FileStore N8300 5.7 now can be used as a storage target in virtual machine environments, where a file-level cloning feature is used to create a golden image and users can clone that image into thousands of VMDK files, said Yogesh Agrawal, Symantec’s vice president and general manager for the FileStore Product Group. Symantec also leveraged code from Veritas Cluster File System to create a deduplication module in the FileStore appliance to reduce redundancies in the VMDK files.
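The economics of cloning a golden image into thousands of VMDK files rest on content-aware deduplication: identical blocks are stored once, so clones cost little more than metadata. A minimal sketch of that idea (illustrative only, not Symantec's implementation — all names here are hypothetical):

```python
import hashlib

class DedupStore:
    """Illustrative content-addressed store: cloning a 'golden image'
    many times consumes almost no extra block storage because identical
    blocks deduplicate to a single stored copy."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # content hash -> block data (one copy each)
        self.files = {}    # filename -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # store each block once
            hashes.append(digest)
        self.files[name] = hashes

    def clone(self, src, dst):
        # A clone is just a new list of references to existing blocks.
        self.files[dst] = list(self.files[src])
```

Under this model, a hundred clones of one golden image add a hundred entries to the file table but zero new data blocks.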
The NAS device also now supports metro-clustering replication for disaster recovery, which automates the process of bringing up the disaster recovery site when the primary site goes down; previously, the disaster recovery site had to be brought online manually. Metro-clustering is based on synchronous volume mirroring, and the cluster is limited to a distance of 100 kilometers. Cascading replication allows replication from a secondary site to a tertiary site. “In that scenario, we can do synchronous replication,” Agrawal said.
FileStore starts with about 10 TB of capacity and can scale to 1.4 PB. A common customer configuration is a two-node, 24 TB system that has a list price of $69,796.
Dell today officially ended its 10-year partnership with EMC, saying it would no longer sell EMC products after making a series of storage acquisitions over the past four years.
Customers who purchased EMC storage from Dell will continue to receive support, Dell said in a statement, but it is ending its OEM and reseller deals for EMC Clariion, Celerra, Data Domain and VNX systems.
The move isn’t much of a surprise, considering Dell had already driven a large wedge into the relationship by buying its own storage companies – including several direct competitors to EMC.
Dell has sold EMC storage since 2001, and in late 2008 the vendors said they were extending their OEM agreement through 2013. Dell also widened the deal in March of 2010 by adding EMC Celerra NAS and Data Domain deduplication backup appliances to their OEM arrangement. However, the relationship had already started to deteriorate by then, going back to when Dell acquired EMC competitor EqualLogic in early 2008. The rift became irreparable last year when Dell followed an unsuccessful bid for 3PAR by completing an $820 million acquisition of Compellent in December.
Dell also acquired data reduction vendor Ocarina and the assets of scale-out NAS vendor Exanet in 2010, giving it more storage IP to integrate with its platforms.
Even before Dell bought Compellent, EMC CEO Joe Tucci said a year ago that the once-tight relationship between the vendors had “cooled off” after Dell tried to buy 3PAR. A Dell spokesman responded by saying EMC still played an important role in Dell’s storage strategy.
There has been no OEM deal for EMC’s VNX unified storage system launched last January, although Dell did have an OEM deal for the Clariion and Celerra platforms that VNX replaced. EMC has built up its channel this year, making the SMB VNXe product a channel-only offering that directly competes with Dell products. Last week EMC launched a channel-only Data Domain DD160 SMB system.
We hear a lot these days about tiering and caching in storage systems. These are not the same thing. Some systems implement tiering across types of media, while others cache data into a solid-state device as transient storage. Other storage systems have both capabilities.
IT professionals may wonder what the differences between tiering and caching are, and whether they need to tier or cache data. There are clear differences, but the performance implications between the approaches vary primarily based on the specific storage system implementation.
Tiered storage systems use different devices such as solid-state devices, high-performance disks and high-capacity disks. Each of these device types makes up a tier. The systems intelligently move data between the tiers based on patterns of access — a process known as automated tiering.
Tiering greatly increases the overall system performance, with access to the most active data coming from the highest performance devices. The higher performance allows the systems to support more demanding applications. Tiering also lets an organization get by with smaller amounts of the most expensive types of storage by moving less frequently accessed data to cheaper drives.
Caching systems use memory or solid-state devices to hold transient copies of highly active data, so that data can be served from the higher-performing technology. The cached data also resides in a permanent location in the storage system; the cache holds only a copy.
Caching may be done in RAM or in solid-state devices used specifically for caching. RAM cache can be protected by a battery or capacitor.
Caching has been used effectively to speed storage performance for many years. In the mainframe world, the caching is controlled with information communicated from the operating system. In open systems, the storage systems contain the intelligence to stage or leave copies of active data in the cache. Storage systems can cache read data only, or they can also accelerate writes.
If a storage system features tiering and caching, the features need to work in concert to avoid wasted or conflicting data movement. There can be improved performance if the two capabilities work together.
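The core distinction can be sketched in a few lines of illustrative Python (not any vendor's implementation; the class and method names are hypothetical): tiering moves the sole copy of data between device classes, while caching stages a transient copy alongside the permanent one.

```python
class TieredStore:
    """Tiering: hot data MOVES between tiers; one copy exists at a time."""
    def __init__(self):
        self.ssd, self.hdd = {}, {}   # dicts standing in for device tiers

    def promote(self, key):
        if key in self.hdd:                    # hot data migrates up...
            self.ssd[key] = self.hdd.pop(key)  # ...and leaves the HDD tier

class CachedStore:
    """Caching: hot data is COPIED; the permanent copy stays on disk."""
    def __init__(self):
        self.cache, self.disk = {}, {}

    def read(self, key):
        if key not in self.cache:              # cache miss: stage a copy
            self.cache[key] = self.disk[key]
        return self.cache[key]                 # disk copy is untouched
```

The practical consequence: losing a cache device loses only copies, while a tier holds the primary (and possibly only) instance of the data it stores.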
IT professionals need to consider the cost/benefit tradeoffs of tiering and caching. What performance is gained versus the cost? The overall performance benefit needs to be considered in the context of the workload from the applications that use the stored information. Most of the vendors of tiered storage systems have effective tools that analyze the environment and report on the effectiveness of tiering. This is necessary to optimize performance.
There is no easy answer to the choice of tiering, caching, or doing both in a storage system. It becomes a matter of maximizing the performance capabilities of the storage system and what value it brings in consolidation, reduced costs, and overall efficiency gains. An analysis of the value gained versus the cost must be done for any individual system.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
By Sonia R. Lelii, Senior News Writer
Brocade showcased its 1860 Fabric Adapter at Storage Networking World (SNW) in Orlando, Fla., this week. The adapter, which the company describes as “any I/O,” gives customers the option to implement 16 Gbps Fibre Channel, 10 Gigabit Ethernet (10 GbE) or Fibre Channel over Ethernet (FCoE) connectivity. But Brocade product marketing manager James D. Myers doesn’t see many companies implementing FCoE so far.
“There isn’t a lot of adoption yet,” Myers said. “They are buying a lot of converged networks, but they are not turning [FCoE] on yet. There are a few early adopters. Most are hedging their bets. I think it will take upwards of a decade for FCoE to be prevalent.”
Brocade hasn’t been a huge advocate for FCoE the way its rival Cisco Systems has been. But at least one SNW attendee confirmed Myers’ observation. Mitchel Weinberger, IT manager for Seattle-based GeoEngineers, said he researched FCoE and found the performance gain wasn’t significant enough to justify introducing a new technology into his infrastructure. The company uses an iSCSI SAN from Compellent that connects 10 GbE switches to virtual servers.
“We don’t see the benefit,” Weinberger said. “All the studies I’ve seen say the benefits are minimal. We really didn’t see enough advantage to put Fibre Channel over Ethernet. It’s another technology for us to learn, and we don’t have the staff.”
FCoE encapsulates Fibre Channel frames in Ethernet networks, and its benefits include reducing the number of I/O adapters, cables and switches in the data center. But the convergence of Fibre Channel and Ethernet means storage and network administrators must share management responsibilities, or one team must cede control to the other. That can be a big problem in organizations where the two groups don’t get along.
“It makes total sense,” said Howard Marks, chief scientist at DeepStorage.net. “Except for the politics.”
By Sonia R. Lelii, Senior News Writer
Sepaton is looking to move beyond its data protection specialty into Big Data.
At Storage Networking World in Orlando, Fla., this week, new Sepaton CTO Jeffrey Tofano offered a broad description of where the vendor plans to go within the next five years. Tofano didn’t offer too many technology specifics, but said Sepaton’s plan is to position itself for the broader Big Data market.
The idea is to expand its use of NAS protocols to its backup products within the next year, then over the subsequent two years provide “solution stacks” for snapshot archiving, specialized archiving and data protection environments. The goal is to use all of that technology for nearline storage and Big Data. “Our technology is skating where the buck will be, and that’s Big Data,” said Tofano, who was previously CTO of Quantum Corp.
There still is a lot of marketing hype around the term Big Data, but Tofano puts it into two buckets: either large data sets or analytics of petabytes of data for business intelligence. Sepaton will be targeting both, he said, by using the company’s technology to “bring specialized processors closer to the storage to do clickstreaming, web logs or email logs.
“It turns out that Big Data is a perfect fit for our [technology] core, which is a scalable grid, content-aware deduplication and replication technology,” Tofano said. “Our technology is not the limiting factor. We have a lot of the pieces in place. We are not building a new box. We are refining a box to get into the Big Data market. Right now, we have a scalable repository bundled behind a Virtual Tape Library [VTL] personality.”
Tofano said the VTL market is mature, and this new direction does not mean Sepaton will get out of the backup space. Obviously, he said, it depends on revenues. “We will become more general purpose over time. We will do storage and support loads outside of data protection,” Tofano said.
Dell today revealed the first product it will release using data reduction technology from its Ocarina acquisition 15 months ago: the DX6000G Storage Compression Node (SCN) for its DX object storage system.
The DX6000G SCN is an appliance based on the Dell PowerEdge R410 server that connects to its DX6000 object storage nodes. Dell director of DX product marketing Brandon Canady said the compression appliance can reduce data by up to 90%, depending on file types. Although Ocarina technology can dedupe or compress files, the object storage appliance will only use compression. It has two modes — Fast Compression mode is optimized for performance and Best Compression mode is optimized for capacity reduction. Customers can choose one or both modes.
Canady said customers can set policies to use fast compression when data is first brought onto the storage system and then switch to the best compression after a pre-configured time period. The appliance uses different compression algorithms depending on file type.
“It’s like applying tiered intelligent compression,” Canady said. “Because we maintain metadata with the file inside of the storage device, we can employ algorithmic policies as part of the lifecycle management of content.”
List price for the DX6000G SCN will begin at about $25,000, depending on the amount of data ingested. The appliance will become generally available next week.
Dell plans to incorporate Ocarina’s compression and deduplication across its storage systems, with more reduction products expected early next year. Canady said the performance and compression modes will likely show up in all of the data reduction appliances.
“Each implementation is likely to be slightly different, but we see value in having a performance approach and a capacity approach,” he said.
The OEM deal with Nirvanix provides IBM with the storage portion of its IBM SmartCloud Enterprise services. IBM bills the SmartCloud storage service as best suited for unstructured data for companies in media and entertainment, healthcare and financial services. Nirvanix’s object storage is designed for content shared across geographical locations and for data that must be retained for long periods. IBM pitches SmartCloud storage as an alternative to tape for backups and archiving.
Nirvanix CEO Scott Genereux said IBM is taking Nirvanix’s existing service without modification, but he plans for the startup to eventually offer IBM services optimized for the SmartCloud service. “There will be integration at the software level,” he said. “A year from now, IBM will have compelling differentiators from anybody else who sells our technology.”
Genereux said the partnership will help IBM and Nirvanix compete against storage vendors pushing cloud implementations, and gives Nirvanix extra ammunition to go up against Amazon S3, Microsoft Azure, and Google Cloud Storage services.
“There’s a big difference when IBM walks in and tries to sell the service than when we walk in as a startup,” he said.
Genereux also said IBM is looking to deliver a true cloud service, unlike storage vendors who try to sell what he calls “cloud in a box” by cloud-washing their hardware products.
“IBM Global Services and Nirvanix are both services companies, and we’re in complete synchronization selling services in a pay-by-the-drink model,” he said.
In the last few years we’ve seen advances in storage technology that have tremendous potential for IT customers. Some of these are enabled by investments made in developing flash solid-state drive (SSD) technology and adapting it to enterprise storage systems.
These new storage technologies improve the processes of storing and retrieving information. They reduce costs, and lead to greater storage efficiencies and capabilities.
Notable storage technologies that have been delivered relatively recently to customers include:
• Tiering products with the ability to place or move data based on a probability of access to maximize the performance of the storage system.
• All-SSD storage systems designed for near-zero latency to provide the highest level of performance in a storage system.
• Faster networks and network convergence where the pipes used in storage to move data allow greater bandwidth and bring the ability to standardize on one type of infrastructure throughout an organization.
• SSD add-in technology where the non-volatility and performance of SSDs can be exploited in a direct manner.
• Forward error-correction technology as a new way to protect data from a failure.
• Scale-out NAS systems to address enterprise demands to store rapidly growing volumes of unstructured data.
There’s another type of product that has not received as much attention, perhaps because it is not simply a matter of using a new technology. That is the process of automating information management to make it easier for administrators to understand the value of data. This area may be more difficult to master, but new developments may yield greater economic value – primarily in operational expense – than some of the newer technologies.
An example of automated information management is what’s called “Active Management.” It is similar to Storage Resource Management (SRM), but in addition to monitoring storage resources, it takes automated actions based on conditions and rules defined by default or customized by the storage administrator.
Yet despite their clear benefits, the adoption rate for these technologies and processes will be slow. That’s mainly because IT is conservative in adopting new concepts and technologies. This does not mean that the technologies won’t be embraced and be successfully deployed. It will just take longer.
The excitement around a new technology needs to be kept in perspective with how long it takes IT to deploy that technology successfully. The adoption rate reflects IT’s inertia and conservatism in handling the critical task of storing data.
The long adoption cycle can kill startup companies that can’t get the investment required to reach profitability. Many investors have an aggressive profile that defies the reality of adoption rates. Larger companies can weather the conservative adoption more successfully, but with much internal angst.
But no matter how great a new storage technology may be, customers need to understand it well enough to make correct decisions about investing in it before it takes hold in data centers.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Startup FlashSoft, which was first out with flash caching software, is looking to stay ahead of impending competition with a major upgrade to its FlashSoft SE barely four months after making the software generally available.
FlashSoft today said FlashSoft SE 2 is in beta, with support for Linux, larger solid-state drives (SSDs) and read-only cache. FlashSoft SE improves application performance with solid-state storage by turning SSD and PCI Express (PCIe) server flash into a cache for the most frequently accessed data.
FlashSoft developers have been busy. CEO Ted Sanford claims 800 feature enhancements in version 2. Besides Linux support, the biggest improvements are an increase in the maximum cache from 256 GB to 1 TB (it now can accelerate up to 10 TB of active data), and the ability to detect a failing SSD, issue an alert, and switch the cache to pass-through mode to avoid writing data to any SSD without a redundant backup. FlashSoft SE 2 also has a new GUI that provides runtime data and statistics through the Windows Management Instrumentation (WMI) API.
The software now supports read-only caching as well as read-write caching. Sanford said read-write caching works best for most applications, but apps with large read requirements such as video can benefit from read-only. FlashSoft also increased its read cache performance for large files and data sets.
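The failure-handling behavior described above — detect a failing SSD, alert, and fall back to pass-through so no data lives only on the suspect device — can be sketched roughly as below. This is not FlashSoft's design; the class and its write-through policy are illustrative assumptions:

```python
class SSDCache:
    """Sketch of a read-write (write-through) flash cache with a
    pass-through fallback mode. All names here are hypothetical."""

    def __init__(self, backing):
        self.backing = backing      # dict standing in for the disk array
        self.flash = {}             # dict standing in for the SSD cache
        self.pass_through = False   # set when the SSD looks unhealthy

    def mark_ssd_failing(self):
        # On a failing SSD: raise an alert (omitted) and stop staging
        # data to flash so nothing exists only on the suspect device.
        self.pass_through = True
        self.flash.clear()

    def read(self, key):
        if not self.pass_through and key in self.flash:
            return self.flash[key]      # cache hit from flash
        value = self.backing[key]       # miss: go to backing store
        if not self.pass_through:
            self.flash[key] = value     # stage a copy for next time
        return value

    def write(self, key, value):
        self.backing[key] = value       # write-through: disk copy first
        if not self.pass_through:
            self.flash[key] = value
```

In pass-through mode every I/O goes straight to the backing store, so performance drops to baseline but no data can be stranded on the failing SSD.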
In addition, FlashSoft upgraded FlashSoft SE-V for virtual servers to 2.0, adding support for virtual servers running on Linux and Windows.
FlashSoft SE pricing starts at $2,000 and depends on the number of servers accelerated and the type of SSDs used. FlashSoft sells FlashSoft SE as standalone software, but Sanford said he is working on partnerships to bundle the software with SSDs.
Sanford said the new features were mostly suggestions from early customers and organizations evaluating FlashSoft SE software. FlashSoft is probably also hearing footsteps from other flash cache competitors entering the market.
EMC‘s PCIe server-side cache “Project Lightning” product that uses EMC’s FAST software to improve performance is due out by the end of the year, as is Fusion-io’s ioCache that uses software that Fusion-io acquired from early FlashSoft competitor IO Turbine. SSD vendor STEC is sampling its EnhanceIO SSD Cache Software to partners and early customers, startup VeloBit is in beta with its SSD caching software, and Nevex and perhaps other startups are on the way with competing products.
“We feel like it’s a significant accomplishment that we’re shipping a 2.0 release in advance of anybody else shipping 1.0,” Sanford said. “We believe we will continue to advance the art. Leveraging flash as cache in existing servers or being added to new servers coming out is a large market. When you can improve an application’s performance by three to five times, that’s a fundamentally strong value proposition. There will be room for significant success for several companies.”