Storage Soup


November 3, 2011  4:26 PM

Fusion-io CEO: EMC’s Project Lightning will cost too much

Dave Raffo

Fusion-io showed there is a hunger for server-based PCIe solid-state drive (SSD) accelerator cards by beating Wall Street estimates for sales last quarter, and CEO David Flynn said he’s not worried about EMC cutting into his success when the storage vendor comes out with its server-based flash product.

EMC has been touting its Project Lightning product since May. The product is in beta and expected to become generally available next month. But Fusion-io’s Flynn maintains that putting flash in the server alongside high-performance storage arrays is too expensive for widespread adoption.

“EMC is trying to make it additive to its existing business,” Flynn said during the vendor’s earnings conference call Wednesday night. “It’s relegated only to customers willing to pay an additional premium for performance on top of the premiums they already pay [for storage arrays].”

Flynn said using Fusion-io cards in servers allows customers to boost performance without using high-end storage arrays, keeping costs down. EMC obviously wants to continue selling storage arrays alongside servers with PCIe flash. Flynn questioned how many EMC storage system customers will want to pay for another flash device.

“We believe this isn’t about higher performance storage at yet a higher cost,” Flynn said. “This is about bringing cost way down. We believe customers will not pay twice, especially if the performance is solved out front. Instead, they will gravitate to lower cost solutions.”

He said Fusion-io’s IO Turbine software will let organizations get many of the storage management benefits of Project Lightning without EMC arrays.

Fusion-io, which became a public company with an IPO in March, reported revenue of $74.4 million and net income of $7.2 million for last quarter. That compares to $27 million in revenue and a $5.8 million loss a year ago when it was a private company. Fusion-io’s forecast for last quarter was in the $60 million to $65 million range.

November 2, 2011  3:53 PM

Cloud vision still unclear

Sonia Lelii

During the EMC Forum 2011 hosted by the storage giant a few weeks ago, EMC president Pat Gelsinger described the still-young cloud computing era as “the most disruptive we have seen in the last 40 years.” He was talking about disruption for customers, but watching storage vendors deal with the cloud makes it clear that the cloud is also disruptive to their plans.

We’ve seen traditional storage vendors try to recast their legacy products as private, public and hybrid cloud technologies as they seek ways to continue selling those technologies under the cloud banner. Keeping storage clouds loosely defined is in their vested interest, at least until customers figure out exactly how – or if – they want to use the cloud.

EMC is a perfect example of a vendor looking to define cloud storage in its own image. EMC originally hailed its Atmos object-based platform as its cloud product. But at its recent Forum, EMC showcased Isilon scale-out NAS and its VNX midrange unified storage platform as private and hybrid cloud products. Atmos was hardly mentioned.

EMC is also among the vendors who talk of server virtualization as a fundamental cloud technology. That’s no surprise, because EMC is majority owner of server virtualization market leader VMware and many of its customers have already gone down the virtualization path or are planning to do so. When asked to define the cloud, Gelsinger mentioned virtualization, a shared pool of computing, networking and storage, and an automated, managed environment. “What we call IT as a service,” he said.

There seem to be as many definitions of storage clouds as there are people in the storage industry.

Let us know what you think.


October 31, 2011  7:30 PM

Mastering DR is a critical skill for storage pros

Randy Kerns

When working with storage professionals, I always try to understand where storage fits in their organization’s strategic initiatives. The business environment they work in and how they interact with the business owners of critical applications will explain a great deal about the opportunities and limitations for improving their storage strategy.

Storage professionals interact with business owners in a variety of ways. These include:

  • The storage team partners with the business owners in planning storage and data protection.
  • The storage group is perceived as a resource to be called upon by the business owners. The group provides storage at a particular rate (i.e., “gold level”), which dictates performance, data protection and cost.
  • The business owners are less than cooperative with the storage team, making demands while providing little planning or guidelines regarding their needs. And the business owners complain that storage provisioning is always holding them back.

There are variations of these, and some extreme cases that make for interesting discussions, but storage professionals always raise one common point. That is, when it comes to business continuance/disaster recovery (BC/DR), the storage group plays a key role in putting together an effective solution. Planning, implementing, and periodically testing BC and DR for a business or organization are complicated, costly and necessary processes for most organizations. This is where the storage team is a critical resource, and its influence reaches into the deployment of storage for critical applications.

Planning BC and DR requires an expertise gained from experience. Storage professionals generally understand this, and can leverage these processes to make more effective, longer-term storage decisions.

Understanding all the options and technologies involved in BC and DR is an important skill for storage professionals. They need to be continually learning about technologies and products to be effective. This information will help them make decisions at critical moments about deploying applications that can add to the success of a company.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 27, 2011  8:52 PM

LSI accelerates its move into flash

Dave Raffo

LSI, which left the storage systems business this year, is going full bore into the enterprise flash business.

LSI acquired flash controller chip vendor SandForce for $322 million Wednesday, seven months after it sold off its Engenio storage systems division to NetApp for $480 million. LSI already had an equity stake in SandForce and is one of its customers. SandForce also sells chips to OCZ, Smart Modular, Viking Technology and others.

LSI uses SandForce’s solid-state drive (SSD) chip in its server-based WarpDrive PCIe cards. When the SandForce deal closes – probably in January – LSI will have more control over that technology at a time when server-based PCIe flash is gaining a lot of attention.

Fusion-io turned its early dominance of PCIe flash for enterprises into a successful IPO, and competitors are lining up to challenge Fusion-io. EMC is among them with its Project Lightning product that is in beta and expected to ship by the end of 2011. Industry sources say EMC will use PCIe cards from Micron and LSI as part of Project Lightning. LSI executives won’t name their OEM customers, but LSI CEO Abhi Talwalkar said Wednesday that he expects a major storage vendor to start selling WarpDrive adapters at the end of this year.

Gary Smerdon, vice president of LSI’s accelerated solutions division, said owning the flash controller chip technology will result in tighter integration of LSI’s flash and management products. The acquisition also guarantees that LSI can keep the flash IP that is already in its products.

“We believe the market for PCIe flash adapters is a rapidly growing market,” Smerdon said. “Now we have a division to specifically focus on the PCIe opportunity. We’re using SandForce’s FSP (flash storage processor), but we didn’t want to talk about a lot of the benefits before because that begs the questions, ‘Where are you getting this from?,’ and ‘What happens if something happens to [SandForce]?’”

LSI executives say they intend to keep SandForce’s customers, too. At least one seems happy to stay onboard for now. After the deal was announced, OCZ CEO Ryan Petersen released a statement saying “SandForce has been a great partner, and we expect the added resources of LSI will only benefit SandForce’s customers. Moreover, because OCZ and SandForce previously contemplated this scenario, we expect that this combination will have no material impact to our existing product lines or business.”

OCZ is SandForce’s largest customer and is responsible for most of SandForce’s revenue, which is expected to be around $60 million this year.

SandForce is the second SSD device startup acquired this year. SanDisk acquired Pliant Technology for $327 million in May.


October 27, 2011  12:55 PM

Index Engines revs its discovery appliance

Brein Matturro

By Todd Erickson, News and Features Writer

Index Engines is giving its e-discovery platform a facelift, adding a redesigned interface and new features.

Index Engines renamed its products, bringing them all under the new Octane brand. The latest version, Octane 4, has a redesigned GUI and a compliance archive that make it easier to search and collect data.

Index Engines added a departmental archive to its policy-based information management platform to let storage administrators and legal teams capture, retain and secure litigation and compliance related files and email messages. The scalable archive can be created within the Octane 4 appliance or on another disk for long-term retention and legal collection.

Jim McGann, Index Engines’ vice president of marketing, called the new archive a “sandbox” for legal and compliance teams because once the archive is populated, lawyers can narrow and refine searches for relevant litigation and compliance related data.

The 2U Linux-based appliance hooks into the network and can auto-discover information sources based on IP addresses, or you can point it at your file, email and backup resources. The collection engine can collect data from many sources, including file shares, Exchange servers, and disk and tape backups.

Customers can scan and copy information into the archive based on user-created policies — such as date ranges, custodians, document types, and keywords — and schedule it to automatically collect changed or new files. This means storage administrators working with legal teams don’t have to keep going back to do new searches to update the archive.
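
To make the policy idea concrete, here is a minimal, purely hypothetical sketch in Python. The field names, the schedule value and the example values are assumptions for illustration only; Octane 4 is configured through its own GUI, not code.

```python
# Hypothetical sketch of the kind of collection policy described above.
# The field names and schedule format are assumptions for illustration;
# the real appliance is configured through its GUI, not through code.
from dataclasses import dataclass
from datetime import date

@dataclass
class CollectionPolicy:
    name: str
    date_range: tuple        # (start, end) dates for responsive documents
    custodians: list         # user accounts whose data is in scope
    doc_types: list          # e.g. file extensions to collect
    keywords: list           # terms that flag a file or message as relevant
    schedule: str = "daily"  # re-run automatically to pick up new or changed files

# Example: a policy a legal team might define for a litigation hold.
policy = CollectionPolicy(
    name="sample-litigation-2011",
    date_range=(date(2010, 1, 1), date(2011, 10, 1)),
    custodians=["jdoe", "asmith"],
    doc_types=[".doc", ".xls", ".pst"],
    keywords=["contract", "settlement"],
)
print(policy)
```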

Pricing for an Octane 4 appliance starts at $50,000 for 100-user accounts.


October 26, 2011  8:45 PM

Violin tunes up for Big Data analytics

Dave Raffo

Jonathan Goldick, Violin Memory’s CTO of software, sees solid state playing a key role in storage for Big Data, and he’s not talking about scale-out NAS for large data stores.

Goldick says solid-state drives (SSDs) can help run analytics for Hadoop and NoSQL databases better in storage racks than in shared-nothing server configurations.

“We’re focused on the analytics end of Big Data – getting Hadoop and NoSQL into reliable infrastructures while getting them to scale out horizontally,” he said. “Scale-out NAS is a different part of the market.”

Today, Violin said its 3000 Series flash Memory Arrays have been certified to work with IBM’s SAN Volume Controller (SVC) storage virtualization arrays. Goldick pointed to this combination as one way that Violin technology can help optimize Big Data analytics. The vendors say SVC’s FlashCopy, Easy Tier, live migration and replication data management capabilities work with Violin arrays.

Goldick said running Violin’s SSDs with storage systems speeds the Hadoop “shuffle phase” and provides more IOPS without having to add spindles. SVC brings the management features that Violin’s array lacks.

“Hadoop is well-optimized for SATA drives, but there’s always a phase when it’s doing random I/O called the ‘shuffle phase,’ and you’re stalled waiting for disks to catch up,” said Goldick, who came to Violin from LSI to set the startup’s data management strategy. “We’re looking at a hybrid storage model for Big Data. You’ve heard of top-of-the-rack switches, we look at Violin as the middle-of-the-rack array. It gives you fault tolerance and the high performance you need to make Big Data applications run at real-time speeds.”

He said Hadoop holds data in transient data stores and persistent data stores. It’s the persistent data – which is becoming more prevalent in Hadoop architectures – where flash can help. “So you think of Hadoop not just as analytics but as a storage platform,” he said. “That’s where IBM SVC bridges a gap for us. When data is transient you don’t need data management services as much. When you start keeping the data there, it becomes a persistent data store of petabytes of information. You need data management features that enterprise users have come to expect – things like snapshotting, metro-clustering, fault tolerance over distance.”
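
For readers less familiar with Hadoop internals, here is a toy sketch of the shuffle step Goldick describes. It is a simplified model, not Hadoop’s actual implementation: each mapper partitions its output by reducer, and every reducer must then gather and merge its slice from every mapper, which on disk turns into the scattered reads that stall spinning SATA drives.

```python
# Toy model of the Hadoop shuffle, not Hadoop's actual code path.
# Each mapper partitions its output by reducer; each reducer then has to
# gather and merge its partition from every mapper. On disk this becomes
# many small, scattered reads -- the random I/O Goldick is describing.
from collections import defaultdict

NUM_REDUCERS = 4

def map_phase(records):
    """Partition key/value pairs by destination reducer (hash partitioning)."""
    spills = defaultdict(list)   # one spill list per reducer partition
    for key, value in records:
        spills[hash(key) % NUM_REDUCERS].append((key, value))
    return spills

def shuffle_and_reduce(all_mapper_spills):
    """Each reducer pulls its slice from every mapper's spills and merges it."""
    for r in range(NUM_REDUCERS):
        merged = sorted(kv for spills in all_mapper_spills for kv in spills.get(r, []))
        yield r, merged

mappers = [map_phase([("a", 1), ("b", 2)]), map_phase([("a", 3), ("c", 4)])]
for reducer_id, data in shuffle_and_reduce(mappers):
    print(reducer_id, data)
```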

Violin’s 3000 series is also certified on EMC’s Vplex federated storage system. EMC is talking about Big Data more than any other storage vendor, with its Isilon clustered NAS as well as its Greenplum analytics systems. EMC president Pat Gelsinger last week said Big Data technologies will be the focus of EMC’s acquisitions over the coming months.

If Goldick is correct, we’ll be hearing a lot more about Big Data analytics in storage.

“Last year Big Data was about getting it to work,” he said. “This year it’s about optimizing performance for a rack. People don’t want to run thousands of servers if they can get the efficiency from a rack.”

There are other ways of using SSDs to speed analytics — inside arrays, or as PCIe cards in storage systems or servers. Violin’s Big Data success will be determined by its performance against a crowded field of competitors.



October 26, 2011  3:13 PM

Who makes the call on archiving?

Randy Kerns

Data archiving makes sense when primary storage gets filled up with data that is no longer active. Data growth on primary storage – the highest performing storage with the most frequent data protection policies – results in increasing capital and operational costs.

Organizations can save money by moving the inactive data or data with a low probability of access to secondary storage or archive storage. The question is, who owns the decision of what to move?

IT directors and managers I’ve talked to have a mixed response to that question. Some say it is the business unit’s decision, but IT cannot get a response from them about what data can be archived or moved to secondary storage. Others say that IT has the responsibility but does not have the systems or software in place to do the archiving effectively, usually because they lack a budget for this. And a few say it is IT’s responsibility, and they are in the process of archiving data.

Those who archive with the initiative coming from IT say it is important to make the archiving and retrieval seamless from the user standpoint. Seamless means the user can access archived data without needing to know that the data has been archived or moved. It’s acceptable if the retrieval takes a few extra seconds, as long as there are no extra steps (operations) added to the user’s access.

Implementing archives with seamless access and rules-based archiving by IT requires specific system capabilities. These systems must work at the file system (or NAS) level to be able to move data to secondary or archive systems, and then to retrieve that data.
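
As an illustration of what “seamless” means in practice, here is a minimal sketch of the stub-and-recall pattern that many archiving systems use. It is a generic example under assumed conventions (a marker file standing in for archived data), not any particular vendor’s implementation; in real systems this logic lives inside the file system or NAS controller rather than in application code.

```python
# Generic sketch of the stub-and-recall pattern behind "seamless" archiving.
# Not any vendor's implementation: in practice the logic lives inside the
# file system or NAS head so users never see the extra step.
import shutil
from pathlib import Path

STUB_SUFFIX = ".stub"  # assumption: a small marker file stands in for archived data

def _stub_for(path: Path) -> Path:
    return path.with_name(path.name + STUB_SUFFIX)

def archive(primary_path: Path, archive_dir: Path) -> None:
    """Move an inactive file to archive storage and leave a stub behind."""
    target = archive_dir / primary_path.name
    shutil.move(str(primary_path), str(target))
    _stub_for(primary_path).write_text(str(target))  # stub records where the data went

def read(primary_path: Path) -> bytes:
    """Transparent access: recall the file from archive if only a stub is present."""
    stub = _stub_for(primary_path)
    if not primary_path.exists() and stub.exists():
        shutil.move(stub.read_text(), str(primary_path))  # recall adds a few seconds
        stub.unlink()
    return primary_path.read_bytes()
```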

External tiering, or archiving, is highlighted in the Evaluator Group report that can be downloaded here. This is a major tool in the IT repertoire to help control costs and meet expanding capacity demands. The decision about what to archive needs to be made by IT, but IT requires the system capabilities to make it a seamless activity for users.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 25, 2011  8:06 AM

HDS rolls out private cloud services, eyes Big Data

Brein Matturro

By Sonia R. Lelii, Senior News Writer

Hitachi Data Systems is putting technology from its BlueArc and Parascale acquisitions to work in its private storage cloud and Big Data plans.

HDS today upgraded its Cloud Service for Private File Tiering, and rolled out its Cloud Service for File Serving and Cloud Service for Microsoft SharePoint Archiving as part of its infrastructure cloud strategy.

HDS also outlined its vision for its infrastructure, content and information clouds. BlueArc’s NAS products will provide file storage capabilities in the infrastructure and content clouds while Parascale Cloud Storage (PCS) fits into the content and information clouds.

HDS acquired Parascale for an undisclosed price in August 2010 and bought its long-time NAS OEM partner BlueArc for $600 million last month.

HDS’ strategy is to make its content cloud a single platform for data indexing, search and discovery.

HDS rolled out its Private File Tiering service in June 2010 for tiering data from a NetApp filer to the Hitachi Content Platform (HCP). Now the service also supports tiering from EMC NAS systems to HCP. The file serving and SharePoint cloud services let users share files and SharePoint content from different geographic locations over a LAN, WAN or MAN. These services require a Hitachi Data Ingestor (HDI) caching device in remote sites or branches to tier data to a central location that houses the HCP.

Tanya Loughlin, HDS’ manager of cloud product marketing, said these services already exist but now HDS is packaging them as a cloud that it will manage for customers. The cloud services include a management portal to access billing, payment and chargeback information.

“It’s a private cloud service,” Loughlin said. “Customers don’t have to pay for hardware. They pay on a per-gigabyte basis. This is a way to augment staff and push some of the less-used data to us. We’ll manage it.”

Pricing is not available yet. “We are finalizing that now,” she said. “The products that fit into these services are already priced, so this is a bundling exercise now.”

HDS plans to tackle Big Data through its information cloud strategy by integrating analytics tools and processes into PCS. PCS aggregates Linux servers into one virtual storage appliance for structured and unstructured data. Loughlin said HDS will also use Parascale, BlueArc NAS and the HDS Virtual Storage Platform (VSP) SAN array to connect data sets and identify patterns for business intelligence in the health, life sciences and energy research fields.


October 24, 2011  8:18 PM

Quantum adds SMB NAS and backup, eyes the cloud

Dave Raffo

Quantum today took a break from upgrading its DXi data deduplication platform, and rolled out its first Windows-based NAS systems and expanded its RDX removable hard drive family. The SMB products include a new backup deduplication application.

Quantum launched two NAS boxes based on the Windows Storage Server OS. The NDX-8 is an 8 TB primary storage system that uses an Intel Core i3 3.3 GHz processor and 4 GB of RAM with four 2 TB drives. The NDX-8d is a backup system based on the same hardware with Quantum’s Datastor Shield agentless backup software with data deduplication installed. The NDX-8d includes licenses to back up 10 Windows desktops or laptops and one Windows server or virtual server.

The NAS systems are available in 1U or tower configurations. Pricing starts at $4,029 for the NDX-8 and $5,139 for the NDX-8d.

Quantum also rolled out its RDX 8000 removable disk library, its first automated RDX system to go with its current desktop models. The RDX 8000 has eight slots for RDX cartridges, which range in capacity from 160 GB to 1 TB. The RDX 8000 comes pre-configured with Datastor Shield or Symantec Backup Exec Quickstart software.

The RDX 8000 costs $3,889 with Backup Exec and $4,999 with Datastor Shield. John Goode, director of Quantum’s devices product line, said he expects that customers will use two-thirds fewer cartridges with the Datastor Shield dedupe.

“We felt it was important with disk backup to use deduplication,” Goode said.

Datastor Shield has a different code base than Quantum’s DXi dedupe for its disk target systems. The biggest difference is that Datastor Shield does a bit-level compare, while the DXi software performs variable-block dedupe.
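
For readers unfamiliar with the distinction, here is a toy sketch of variable-block (content-defined) chunking, the general idea behind dedupe engines like DXi’s. It is not Quantum’s actual algorithm; the checksum rule and chunk sizes are assumptions chosen only to show why data-defined boundaries survive inserts better than fixed blocks.

```python
# Toy sketch of variable-block (content-defined) chunking, the general idea
# behind dedupe engines like DXi's -- not Quantum's actual algorithm. Boundaries
# are chosen from the data itself, so an insert early in a file shifts only the
# nearby chunks instead of changing every block that follows it.
import hashlib

def chunks(data: bytes, window: int = 16, mask: int = 0x3F) -> list:
    """Cut the byte stream wherever a weak checksum over a sliding window hits zero."""
    out, start = [], 0
    for i in range(window, len(data)):
        if (sum(data[i - window:i]) & mask) == 0:  # crude stand-in for a rolling hash
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out

def dedupe(data: bytes, store: dict) -> int:
    """Keep only chunks whose fingerprint hasn't been seen; return bytes actually stored."""
    new_bytes = 0
    for chunk in chunks(data):
        digest = hashlib.sha1(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
            new_bytes += len(chunk)
    return new_bytes

# A repeated payload dedupes almost entirely on the second pass.
store = {}
print(dedupe(b"hello world " * 100, store))  # stores most of the data
print(dedupe(b"hello world " * 100, store))  # stores little or nothing new
```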

Backup Exec Quickstart is good for one server. If customers need to back up more servers, they must upgrade to the full Backup Exec application.

Datastor Shield can replicate between NDX-8 and RDX boxes, and Goode said it will be able to replicate data to the cloud in early 2012. He said Quantum will offer customers cloud subscriptions and work with a cloud provider, and will also have cloud-seeding options.


October 24, 2011  3:28 PM

EMC changes the channel on Dell sales

Brein Matturro

By Sonia R. Lelii, Senior News Writer

EMC president Pat Gelsinger said EMC had already moved on by the time Dell officially ended their storage partnership last week after a 10-year relationship.

Gelsinger said it was no secret that EMC’s partnership with Dell had to drastically change or end after Dell expanded its storage presence by acquiring EMC competitors EqualLogic and Compellent.

“It got to a natural point where the relationship had to be restructured or it had to come to an end. Unfortunately, it came to an end,” Gelsinger told a group of reporters last Thursday at EMC Forum 2011, held at Gillette Stadium in Foxborough, Mass.

Dell sold EMC’s Clariion, Celerra, Data Domain and VNX systems through OEM and reseller deals, with the bulk of the revenue generated from Clariion midrange SAN sales. Dell will also no longer manufacture EMC’s low-end Clariion.

Revenue from Dell sales of EMC products has been sliding since last year, Gelsinger said. EMC reported Dell-generated revenue of $55 million in the fourth quarter of 2010, and that fell to under $40 million in the first quarter of this year. EMC has not given a figure for Dell revenue since then, but its executives said its non-Dell channel sales for the mid-tier increased 44% year-over-year in the third quarter of this year.

EMC has built up its channel this year, making the SMB VNXe product a channel-only offering that directly competes with Dell products. Earlier this month, EMC launched a channel-only Data Domain DD160 SMB system.

EMC has also continued to upgrade its VNX midrange platform. Last week it launched an all-flash model (VNX5500-F) as well as a high-bandwidth VNX5500 option with four extra 6 Gbps SAS ports, and support for 3 TB SAS drives throughout the VNX family.

“Now that we are no longer continuing forward [with Dell], we have to do it ourselves,” Gelsinger said. “It’s a clear, simple focus on our part.”

Dell began selling EMC storage in 2001, and in late 2008 the vendors said they were extending their OEM agreement through 2013. Dell also widened the deal in March 2010 by adding EMC Celerra NAS and Data Domain deduplication backup appliances to their OEM arrangement. However, the relationship had already started to deteriorate by then, going back to when Dell acquired EqualLogic in early 2008.

The rift became irreparable last year when Dell followed an unsuccessful bid for 3PAR by completing an $820 million acquisition of Compellent in December.

