Storage Soup


June 27, 2012  10:58 AM

HDS growing VSP and NAS array sales, cautious on flash

Dave Raffo

Like other pure-play storage vendors, Hitachi Data Systems is growing revenue at double-digit rates despite the slow economy. But HDS is bucking industry trends with its growth.

HDS grew disk storage revenue by 11% year-over-year during the first quarter of 2012, according to IDC. That’s a bit slower than EMC (14.4%) and NetApp (11.1%), but much faster than IBM, Hewlett-Packard and Dell. HDS made great progress with NAS and enterprise SAN sales – two categories with slow or no growth. IDC said industry-wide NAS sales declined 1.9% during the first quarter, but HDS claims it increased NAS sales by more than 50%. And HDS high-end SAN sales grew more than 30% despite flat growth industry-wide for storage systems costing $250,000 or more.

HDS remains fifth in overall disk storage sales, but is gaining on No. 3 IBM and No. 4 HP.

Asim Zaheer, SVP of marketing for HDS, attributes the NAS spike to Hitachi’s acquisition of its long-time OEM partner BlueArc last September. HDS had sold BlueArc NAS systems since 2006, but Zaheer said sales jumped after the $600 million acquisition. He said customers were looking for that commitment from HDS, especially after BlueArc indicated it might become a public company.

“Our belief is there was pent-up demand out there with potential new accounts relative to our long-term commitment to the technology,” he said. “They were waiting for a signal from us. BlueArc was discussing an IPO, but we took that concern off the table.”

The Virtual Storage Platform (VSP) enterprise SAN is Hitachi’s flagship product, and HDS picked up market share from EMC and IBM in that category. HDS likely benefitted from EMC’s transition to a new Symmetrix VMAX, but Zaheer said the VSP’s storage virtualization features also helped. “There’s quite an increase in customers virtualizing third-party arrays because of concern about budgets,” he said.

Zaheer said the hard drive shortage didn’t hurt HDS much. While it raised prices just as all its major competitors did, Zaheer said HDS shipped all of its orders in the first quarter. “We felt it, but we did not have to stop or delay shipment on anything,” he said. “I don’t know if we’re out of the woods yet, but our supply appears to be back to almost normal levels.”

HDS is less bullish on flash than its competitors, particularly EMC. So far, HDS’ flash offerings consist of the option to add solid-state drives (SSDs) to arrays. “The market is there, but it’s not exploding to the levels that EMC and others have predicted,” Zaheer said. “You have to have the right use cases and the economics have to make sense. If customers feel they need SSDs in their arrays, we can do that. It’s growing, but not like the hockey stick that everyone thought.”

Still, Zaheer said HDS is planning other flash products, such as all-SSD arrays and server-side flash, in anticipation that demand will grow. “You’ll see some announcements soon,” he said.

June 26, 2012  7:52 AM

DataCore teaches SANsymphony-V to play in the cloud

Sonia Lelii

DataCore has upgraded its SANsymphony-V storage virtualization and management software to make it better suited to large enterprises and clouds. The vendor launched SANsymphony-V 9 today with new or expanded automated disk pooling, auto-tiering, asynchronous remote replication, synchronized mirroring, disk migration and load balancing.

Previous versions of SANsymphony-V targeted the midmarket. With version 9, DataCore is going after large data centers, companies looking to build private clouds, and cloud service providers with private, public or hybrid cloud offerings.

“We are trying to take it up a higher level,” DataCore CEO George Teixeira said. “We have automated tasks to make it simple, so you don’t have to focus on the details. Most of the commands and features have been made more adaptive.”

SANsymphony-V, which DataCore bills as a storage hypervisor, allows data on disk, solid state drives (SSDs), and Google cloud storage to be managed as a single pool. Auto-tiering can be applied so that administrators can put higher performing applications in memory, while archiving data into the cloud, Teixeira said.
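For readers unfamiliar with auto-tiering, the basic mechanic is simple: track how often each extent of data is accessed and promote or demote it between faster and slower tiers. The Python sketch below is a generic illustration of that idea, not DataCore’s implementation; the tier names, thresholds and extent structure are assumptions.

# Conceptual sketch of access-frequency-based auto-tiering (not DataCore's
# actual code): hot extents move to the fastest tier, cold extents are
# demoted toward cloud/archive storage. Names and thresholds are illustrative.

TIERS = ["ssd", "disk", "cloud"]      # fastest to slowest (assumed)
PROMOTE_THRESHOLD = 100               # accesses per interval (assumed)
DEMOTE_THRESHOLD = 5

def retier(extents):
    """extents: list of dicts with 'tier' and 'access_count' keys."""
    for ext in extents:
        tier_idx = TIERS.index(ext["tier"])
        if ext["access_count"] >= PROMOTE_THRESHOLD and tier_idx > 0:
            ext["tier"] = TIERS[tier_idx - 1]      # promote to a faster tier
        elif ext["access_count"] <= DEMOTE_THRESHOLD and tier_idx < len(TIERS) - 1:
            ext["tier"] = TIERS[tier_idx + 1]      # demote to a slower tier
        ext["access_count"] = 0                    # reset for the next interval
    return extents

extents = [{"tier": "disk", "access_count": 250},
           {"tier": "ssd", "access_count": 2},
           {"tier": "disk", "access_count": 40}]
print(retier(extents))   # hot extent promoted to ssd, cold ssd extent demoted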

DataCore also automated its N+1 scaling feature, allowing administrators to scale capacity and processors by adding one node at a time. The extra node can take over if any node in the cluster is lost.

Snapshots of multiple drives now can be done with a single click, and one-to-many bidirectional replication has been automated. Load balancing among multiple drives has also been automated.

DataCore is also adding reporting for chargeback and a DataCore Cloud Service Provider Program that offers new licensing options, allowing CSPs to license the storage virtualization software at a fixed monthly, per-terabyte rate.


June 25, 2012  7:19 AM

Storage licensing issues add up to confusion, frustration

Randy Kerns

One of the most frequent questions that I get is how the licensing for a particular storage feature or software application is handled.

The variations from different vendors have left IT professionals wary of the complexities and costs. The wariness presents itself in different ways – pure distrust for certain vendors or frustration with having to plan for operational expenses not originally considered when estimating the acquisition price. There is often an interesting backstory around a previous issue with licensing.

Licensing is a way for vendors to get paid for their investment in developing a product or feature. The idea behind licensing with variable cost is to provide a graduated scale where the amount paid (the license cost) increases as value is gained. There may be inconsistencies in the applicability of the scale and — from a customer viewpoint — the value gained may not be the same for everyone.

The licensing terms can vary significantly, and the variance is one of the basic frustrations. Vendors may license storage hardware and software by capacity (raw capacity, usable capacity or actual space used), by the number of attached servers, initiators, connected network ports or processors, by processor type, or by the size of the IT environment.
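A quick, hypothetical comparison shows why the metric matters as much as the price. The figures below are invented for illustration only; they are not any vendor’s list prices.

# Illustrative comparison of how the same configuration can be priced very
# differently depending on the licensing metric. All numbers are hypothetical.

raw_tb = 100          # raw capacity
usable_tb = 70        # after RAID and spares
used_tb = 45          # actual space consumed
attached_servers = 24

price_per_raw_tb = 150
price_per_usable_tb = 200
price_per_used_tb = 300
price_per_server = 900

models = {
    "per raw TB":    raw_tb * price_per_raw_tb,
    "per usable TB": usable_tb * price_per_usable_tb,
    "per used TB":   used_tb * price_per_used_tb,
    "per server":    attached_servers * price_per_server,
}

for model, cost in models.items():
    print(f"{model:13s} -> ${cost:,}")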

While a specific product may be clear on the licensing terms and the vendor can clearly explain the terms, IT is typically dealing with more products and more than one vendor. The inconsistency compounds the licensing frustration.

Administration of the licensing is also an aggravating area. How is the reporting done? Is it worthwhile for IT to move data, change configurations or retire systems early to save on licensing fees? Managing to the details of licensing, or even having to think about it, adds overhead.

For some storage systems, add-on, high-value features result in extra charges, and vendors have established new practices for competitive purposes. Some take an “all-in” approach, with no extra licensing charge for features. All-in licensing attacks a high-profit area for vendors that still charge separately, which is why competitors adopt it.

Another competitive area is the one-time license for basic system capabilities or features, paid when the hardware is purchased but tied to a specific system serial number. When new hardware is purchased, the software must be licensed again for the new platform. To differentiate themselves, some vendors allow the license to be transferred to the new hardware.

Some of the angles vendors use are opportunities to maximize revenue; others are fair ways to earn a return on their investment. The problem is the confusion and inconsistency that arise when multiple vendors (and even multiple products from the same vendor) are considered.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 22, 2012  11:12 AM

X-IO’s hybrid storage gets thumbs up at TechEd

Dave Raffo

When X-IO won two Best of Microsoft TechEd awards last week, it was the second time in two months that X-IO CTO Steve Sicola felt that his technology was validated at a vendor show.

X-IO’s Hyper ISE storage system won Best Hardware and Storage Product and Attendees’ Pick at TechEd. Hyper ISE uses solid-state drives and hard drives in self-contained enclosures.

Sicola said EMC executives unintentionally endorsed Hyper ISE last month at EMC World by claiming the hybrid approach is best for flash.

“Even [EMC CEO Joe] Tucci said hybrid is the best SSD solution for the foreseeable future,” Sicola said. “We’re the guys who started hybrids.”

EMC is also preparing to launch an all-flash storage product next year from technology acquired from startup XtremIO. Sicola said X-IO has all-flash storage in its plans too, but sees plenty of value left in hard drives.

“We will do all-SSDs when it’s time,” he said. “It will be great when the price comes down. As SSDs get more mature – and the price curve is helping – you’ll see more SSDs hitting the market, but hard drives still do pretty well.”


June 21, 2012  8:03 AM

Will Bitcasa be su casa for files?

Dave Raffo

Bitcasa took in $7 million in Series A funding to enter the cloud storage market this week. The startup also launched an open beta for its consumer file storage cloud and declared plans to expand its service to businesses later this year.

Bitcasa CEO Tony Gauda hopes to lure consumers with the promise of unlimited cloud data for $10 per month. He said Bitcasa already stores more than a billion files and 4 PB of data on Amazon S3 from its private beta customers. The plan is for Bitcasa to eventually host its own cloud, Gauda said.

Within six months, Gauda hopes to have an SMB version that will likely be priced on a per-seat basis.

“Today we are consumer-oriented,” he said. “But there’s a huge SMB enterprise play in our technology.”

Gauda said Bitcasa can serve as primary storage as well as backup and archiving. Bitcasa software integrates into the operating system on any computer running Windows, Mac OS X or Linux. The user clicks on a folder to make it “infinite.” Data written to the folder will go to the cloud but appear as if it is local on the device. Bitcasa compresses, deduplicates and encrypts data before sending to the cloud. It also caches frequently accessed files locally.

Installing in the OS might scare some people, but Gauda said it is necessary.

“Bitcasa intercepts OS calls and looks like a hard disk to the OS or applications running on that device,” Gauda said. “From a user perspective it’s transparent, that’s why we had to be in the OS.”
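As a rough illustration of the client-side flow described above (chunk, deduplicate, compress, encrypt, then upload), here is a minimal Python sketch. It is not Bitcasa’s code; the chunk size, the use of the third-party cryptography package’s Fernet cipher, and the in-memory stand-in for the cloud store are all assumptions.

import hashlib
import zlib
from cryptography.fernet import Fernet   # third-party 'cryptography' package

CHUNK_SIZE = 4 * 1024 * 1024       # 4 MB chunks (assumed)
key = Fernet.generate_key()        # a real client would manage keys per user
cipher = Fernet(key)
uploaded = {}                      # stands in for the cloud object store

def upload_file(path):
    """Chunk a file, skip chunks already stored, compress and encrypt the rest."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in uploaded:           # dedupe: identical chunk already stored
                continue
            payload = cipher.encrypt(zlib.compress(chunk))
            uploaded[digest] = payload       # a real client would PUT this to the cloud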

Does the world really need another cloud file storage service? Gauda is counting on it, because most of the services today don’t handle primary data.

“This is your primary copy – it’s always available, you can access it through clients installed,” he said. “Just use Explorer, go to where the folder usually is, and the data looks like it’s already there. Even without internet connectivity, you have access to data you use on a regular basis because it’s cached.”

Bitcasa’s funding came from Pelion Venture Partners and Horizons Ventures, along with Andreessen Horowitz, First Round Capital, CrunchFund and Samsung Ventures.


June 19, 2012  9:47 PM

Panzura snaps up $15 million in Series C funding

Sonia Lelii

Startup Panzura Inc. this week announced it closed a $15 million Series C funding round, bringing the total the cloud NAS company has raised since September 2008 to $33 million. Venture capital backer Opus Capital led the latest round, which included existing investors Matrix Partners, Khosla Ventures and Chevron Technology Ventures.

The company, which currently has about 60 employees, plans to use the investment for research and development, but primarily to expand its sales and marketing. “We are going to grow that considerably. We will be well over 100 people by the end of the year,” said Panzura founder and CEO Randy Chou.

Founded in July 2008, Panzura raised $6 million in Series A funding in September 2008 and another $12 million in October 2010. The firm sells a cloud storage controller that uses a mix of solid-state drives (SSDs) and hard disk drives (HDDs) and is aimed at public and private cloud storage. The Quicksilver cloud controller is based on the company’s cloud FS global file system.

The Quicksilver controller’s features include global file locking, data deduplication and encryption. The controller supports CIFS and NFS, and provides offline capabilities through on-board SSD storage. It can be used with or without the cloud, and speeds the performance of storage arrays for data replication, backup or primary storage.

The company sells into the high-end enterprise, with deals ranging from $250,000 to $1 million, Chou said. Partners Hewlett-Packard, Nirvanix, EMC, Amazon and Google generate about 75% of its leads. “In two weeks alone, we got into 30 to 40 EMC deals. HP, Nirvanix and EMC bring the company into many of their enterprise cloud deals,” Chou said. “That is what attracted most of our investors in the Series C funding.”


June 18, 2012  8:03 PM

Caringo adds erasure codes, multi-tenancy for private storage clouds

Dave Raffo

Caringo is strengthening its hand for cloud storage with three new software products built on its CAStor object-based storage software.

The Elastic Content Protection (ECP), CloudScaler and Indexer are separately licensed products that can be used independently or in combination to build private and public clouds. ECP uses erasure codes to distribute data across locations, CloudScaler enables multi-tenancy and Indexer is a real-time indexing engine.

CAStor was originally developed as archiving software. Caringo CEO Mark Goros said customers already use CAStor for storage clouds, but features such as erasure codes and multi-tenancy make it better tailored for private clouds in large enterprises.

“We’ve had object storage software since 2006,” Goros said. “This is version six. That means it’s just coming of age, it’s at its peak prowess. Now we’re adding elastic content and erasure code protection.”

Dell uses Caringo software with its DX object storage platform, and Goros said he expects Dell will resell the new Caringo cloud services, too.

Caringo claims ECP can protect exabyte-scale storage by using erasure coding to divide objects and store slices in different places to allow data recovery if slices are lost. Other object storage products use erasure codes, including Cleversafe, AmpliData, Scality, EMC Atmos and DataDirect Networks Web Object Scaler (WOS). Some of these use the Reed-Solomon error correction code while others enhance Reed-Solomon.

Until now, Caringo used replication to protect its clusters. “Customers never had to worry about backups for CAStor clusters,” Goros said. “But as storage requirements get greater and we get to multiple petabytes, people are looking for ways to save space, power and cooling. You can now mix and match between replicas or erasure codes. For small data sets, you want to replicate because erasure code is not effective for that.”
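The space math behind that trade-off is straightforward. The sketch below compares the raw-capacity overhead of three-way replication with a 10-data/6-parity erasure-code layout; those layouts are common examples, not Caringo’s specific parameters.

# Back-of-the-envelope comparison of the storage overhead behind the
# replication-vs-erasure-coding trade-off Goros describes.

def replication_overhead(copies):
    """Raw bytes stored per logical byte with simple replication."""
    return copies

def erasure_overhead(data_slices, parity_slices):
    """Raw bytes stored per logical byte with k data + m parity slices."""
    return (data_slices + parity_slices) / data_slices

print(replication_overhead(3))     # 3.0x raw capacity, tolerates 2 lost copies
print(erasure_overhead(10, 6))     # 1.6x raw capacity, tolerates 6 lost slices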

CloudScaler consists of a software gateway appliance and a management portal. The gateway includes a RESTful API and multi-tenant authentication and authorization capabilities. The portal provides tenant management and handles quotas, bandwidth and capacity metering. CloudScaler can be configured as public, private or hybrid cloud storage, but Goros said it is especially useful for building private clouds. He describes CloudScaler as “Amazon S3-like storage, but fast and secure in your own data center.”

The Indexer consists of a NoSQL data store that indexes objects in a CAStor cluster and allows searching by file name, unique identifier or metadata. The Indexer runs on separate hardware from CAStor but can integrate with the CloudScaler portal to present information in the GUI.


June 18, 2012  7:20 AM

Fundamental changes in data protection underway

Randy Kerns

Data protection is probably the most fundamental requirement in Information Technology (IT), and is generally aligned with storage overall. But data protection is perceived as overhead — a tax on IT operations.

Because of that, data protection gets attention (and major funding) when there is a significant problem. It is increasingly difficult to get protection done in the allotted time while meeting recovery time objectives (RTO) and recovery point objectives (RPO). With capacity demand growing, IT organizations are re-examining their current methods of protecting data.

At the Dell Storage Forum in Boston last week, there was more talk about IT’s transition to including snapshots and replication in the data protection process. Snapshots, or point-in-time copies synchronized with applications so the copy is coherent, have become the primary means of making a copy that can meet the RTO in most of the cases where restores are required. About 90% of restores occur within 30 days of when the data was created or updated. Snapshots are typically taken using features in the storage system, but may also use special host software.
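A back-of-the-envelope calculation shows how those numbers shape a snapshot-based protection scheme. The 90%/30-day figures come from the paragraph above; the four-hour snapshot interval is an assumption for illustration.

# Rough illustration: if ~90% of restores are for data changed in the last
# 30 days, keeping 30 days of snapshots serves most restores from fast
# snapshot copies, with backup/replication covering the rest.

snapshot_interval_hours = 4        # assumed schedule
retention_days = 30                # matches the 30-day figure above

snapshots_retained = retention_days * 24 // snapshot_interval_hours
worst_case_rpo_hours = snapshot_interval_hours   # recovery point from snapshots alone

print(f"{snapshots_retained} snapshots retained; ~90% of restores served from "
      f"snapshots; worst-case RPO from snapshots alone: {worst_case_rpo_hours} hours")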

Replication is typically a remote copy that is used for disaster protection and is also leveraged for restores of data that has been damaged (corrupted or deleted) locally. The mechanics of the recovery vary significantly among the different vendor solutions.

Backup is still used and is still a valuable tool in the data protection arsenal. It is now just one part of an overall picture that includes snapshots and replication. Extensions to backup software are capitalizing on these transitions by IT, adding capabilities such as invoking storage system-based snapshots, managing the catalog of snapshot copies, and managing the remote copies of data.

Exploiting storage system- or hypervisor-based features such as Changed Block Tracking is another way to improve data protection by reducing both the time required and the amount of data moved. This is a developing area and will be a differentiator among backup software solutions and the storage systems that offer those capabilities.
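Conceptually, changed block tracking is just a dirty-block bitmap consulted at backup time. The Python sketch below illustrates the idea generically; it is not VMware’s CBT API or any vendor’s implementation, and the block size is an assumption.

# Conceptual sketch of changed block tracking: a bitmap records which blocks
# were written since the last backup, so only those blocks need to be copied.

BLOCK_SIZE = 4096   # assumed block size

class ChangedBlockTracker:
    def __init__(self, num_blocks):
        self.changed = [False] * num_blocks

    def record_write(self, offset, length):
        """Mark every block touched by a write as dirty."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for block in range(first, last + 1):
            self.changed[block] = True

    def blocks_to_back_up(self):
        """Return the dirty block list and reset the map for the next cycle."""
        dirty = [i for i, c in enumerate(self.changed) if c]
        self.changed = [False] * len(self.changed)
        return dirty

tracker = ChangedBlockTracker(num_blocks=1024)
tracker.record_write(offset=8192, length=10000)   # touches blocks 2, 3 and 4
print(tracker.blocks_to_back_up())                # -> [2, 3, 4]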

Backup software will effectively need to be renamed to reflect that what it does goes beyond traditional backup.

The transitions occurring in data protection are being driven by IT to meet requirements to protect data while also meeting operational considerations. Software and hardware solutions can enable the transitions and make the operations more seamless. This will continue to be a developing area – both for vendor products and the adoption by IT.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 14, 2012  3:03 PM

NetApp jumping through Hadoops

Dave Raffo

NetApp is embracing Hadoop with a converged system combining its two major storage platforms with compute and networking from partners. The vendor also broadened its Apache Hadoop partnerships this week by teaming with Hortonworks.

The NetApp Open Solution for Hadoop Rack includes NetApp FAS and E-Series storage along with Hewlett-Packard servers and Cisco switches. The base configuration consists of four Hadoop servers, two FAS2040 storage modules, three E2660 NetApp storage modules for 360TB of storage, 12 compute servers and two Ethernet switches. The system scales with data expansion racks made up of four NetApp E2660 modules, 16 compute servers and two Cisco switches.

The FAS2040 – including NFS – is used in the Hadoop NameNode and the E2660 with Hadoop Distributed File System (HDFS) is used in the DataNode. The goal is to enable enterprises to move Apache Hadoop quickly from the test lab into production.

“We’ve taken the approach that there is an issue with the NameNode in Hadoop,” said Bill Peterson, who heads solutions marketing for NetApp’s Hadoop and “Big Data” systems. “If that crashes, you lose the entire Hadoop cluster. The community is fixing that so it will no longer be a single point of failure. We decided we would put a FAS box inside the solution, so we could do a snapshot of the NameNode. We use E-Series boxes for MapReduce jobs. So the database of record is on FAS and fast queries are on the E-Series.”
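As a conceptual sketch of the pattern Peterson describes (keeping a protected copy of the NameNode’s metadata outside the cluster so Hadoop can be rebuilt after a NameNode failure), something like the following would work. The paths and interval are hypothetical and this is not NetApp’s implementation; Hadoop can also be configured to write its namespace metadata to multiple directories directly.

import shutil
import time
from pathlib import Path

# Minimal sketch: periodically copy the NameNode's metadata (namespace image
# and edit log) to external, snapshot-protected storage. Paths and interval
# are assumptions for illustration only.

NAMENODE_METADATA_DIR = Path("/var/lib/hadoop/name")   # hypothetical local dir
PROTECTED_COPY_DIR = Path("/mnt/nfs_fas/namenode")     # hypothetical NFS mount

def protect_namenode_metadata(interval_seconds=3600):
    while True:
        dest = PROTECTED_COPY_DIR / time.strftime("%Y%m%d-%H%M%S")
        shutil.copytree(NAMENODE_METADATA_DIR, dest)   # copy fsimage + edits
        time.sleep(interval_seconds)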

The NetApp Open Solution for Hadoop Rack became available this week.

NetApp also signed on to develop and pre-test Hadoop systems that use the new Hortonworks Data Platform (HDP), which became generally available Wednesday. NetApp joint solutions with Hortonworks are expected later this year. NetApp also has partnerships with Apache and Cloudera, and will support all three versions of Hadoop on its Open Solutions Rack.

“That’s why NetApp has open in the name. We want as many partnerships there as possible,” Peterson said.

For greater detail on using Hadoop with enterprise storage, I recommend the excellent series from John Webster of Evaluator Group on SearchStorage.com, beginning here.


June 12, 2012  10:07 AM

Violin, Microsoft play NAS flash duet

Dave Raffo

Violin Memory is providing a window into its roadmap this week at Microsoft TechEd.

Violin and Microsoft are demonstrating what the vendors call a NAS “cluster-in-a-box” with Windows Server 2012 running natively on Violin’s 6000 Flash Memory Array. Violin intends to eventually ship the product as a specialized appliance to handle enterprise file services.

Violin’s current arrays handle block storage. For the NAS box, it added two x86 Intel servers to run Windows. Windows Server 2012 gives the array snapshot, deduplication and replication features.

Other appliances tuned to specific applications will likely follow, says Violin marketing VP Narayan Venkat.

“This cluster-in-a-box is intended to deliver highly scalable file services for large enterprises and internal private clouds,” Venkat said. “It’s the first in a possible series of application appliances. We’ll release the file services one first. The others may be database-in-a-box or private-cloud-in-a-box. We have a tremendous amount of interest from other OEMs. The types of applications that would leverage the 6000 would be databases, ‘big data’ analytics or massive VDI [virtual desktop infrastructure] in a box.”

Violin VP of corporate marketing Matt Barletta said the Violin 6000 has a street price of around $6 per gigabyte to $9 per gigabyte.

Violin has raised $180 million in funding since late 2009, making it the best-funded of the all-flash storage array startups. Barletta said EMC helped prime the market for all-flash storage when it spent $430 million to acquire XtremIO last month. The best part for Violin is that EMC won’t ship an XtremIO array until next year.

“My birthday is next week, and I view that as an early birthday present,” Barletta said.

