Storage Soup


October 3, 2014  9:56 PM

Red Hat faces long-term decisions on Gluster, Ceph in storage portfolio

Carol Sliwa
Storage

While launching the latest version of Red Hat Storage Server yesterday, the vendor provided little insight into the long-term positioning of its storage software portfolio and the chances that it might combine its Gluster-based Storage Server and Inktank Ceph Enterprise product lines.

Ranga Rangachari, vice president and general manager of storage and big data at Red Hat, said the company hopes to “get back to our customers and partners in the very near future with a consolidated vision of where this journey is going.” He addressed the topic in response to a question during the company’s webcast, “Advancing software-defined storage,” a term he said customers understand as the ability to take advantage of industry-standard x86 servers with the intelligence resting in the software.

Rangachari noted simply that Red Hat’s acquisition of Inktank Storage Inc. this year brought object- and block-based storage to the table and complemented the file system capabilities the company gained through its 2011 acquisition of Gluster Inc.

Gluster had sold a supported version of the open source GlusterFS distributed file system in much the same way that Inktank sold a supported version of open source Ceph. Any innovative software development work rests with their respective open source project communities.

“The Gluster and the Ceph communities continue to thrive independently and thrive really well,” said Rangachari, claiming that Gluster and Ceph combined for almost two million downloads during the last nine months. “The innovation that’s going on on both those projects will continue to happen unabated.”

Red Hat put out new versions of each of the commercially supported products this year. Storage Server 3, launched yesterday, is based on open source GlusterFS 3.6 and adds support for snapshots, multi-petabyte scale-out capacity, flash drives and Hadoop-based data analytics. Inktank Ceph Enterprise 1.2, released in July, was based on open source Ceph’s Firefly release and added erasure coding, cache tiering and updated tools to manage and monitor the distributed object storage cluster.
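
Erasure coding is worth a moment’s unpacking: instead of keeping full replicas, data is cut into k data chunks plus m coded chunks, and the cluster can rebuild the original from any k of the k+m pieces. A toy Python sketch of the idea follows; real Ceph uses Reed-Solomon-style plugins (jerasure), while plain XOR parity shown here only covers the k=2, m=1 case.

```python
# Toy illustration of the erasure-coding idea added in Ceph Firefly:
# k data chunks plus m coded chunks tolerate the loss of any m chunks.
# Real Ceph uses Reed-Solomon-style plugins; XOR parity is k=2, m=1 only.
data = b"abcdefgh"
k1, k2 = data[:4], data[4:]                      # two data chunks
parity = bytes(a ^ b for a, b in zip(k1, k2))    # one coded chunk

# Simulate losing chunk k2 and rebuilding it from k1 and the parity chunk.
rebuilt = bytes(a ^ b for a, b in zip(k1, parity))
assert rebuilt == k2
```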

The Ceph open source project claims to be a unified system providing object, block and file system storage. Ceph’s file system runs on top of the same object storage system that provides object storage and block device interfaces, according to the project’s Web site.
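
For the curious, here is what that common object layer looks like from code. This is a minimal sketch using the python-rados bindings; the pool name “data” and the config path are assumptions for illustration, and it requires a reachable Ceph cluster. It is illustrative of the RADOS layer, not anything Red Hat ships.

```python
# Minimal sketch: writing and reading one object in RADOS, the object
# store that Ceph's block (RBD) and file (CephFS) layers sit on top of.
# Pool name "data" and the config path are hypothetical.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")            # an existing pool
    ioctx.write_full("hello-object", b"stored as a RADOS object")
    print(ioctx.read("hello-object"))
    ioctx.close()
finally:
    cluster.shutdown()
```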

“It’s fair to say that file is probably the least well evolved of those three,” said Simon Robinson, a research vice president in storage at New York-based 451 Research LLC. “The file capability is very immature. It’s not enterprise-grade.”

But, as the Ceph technology improves, Red Hat will need to confront the question of whether to continue to focus on Gluster and Ceph, said Robinson.

“I think Red Hat’s bet buying Gluster was, ‘Hey, look at all this unstructured data. Look how quickly it’s growing. We need a play here.’ Three years ago, that play was NAS. Today it looks slightly different,” said Robinson. “When we think about the growth of unstructured data, it’s actually object that is seen as the future architecture rather than NAS.”

He cited Amazon and Microsoft Azure as proof points of the object model working at scale. “It’s just a case of how does that percolate down into the enterprise. It will take time,” he said.

Robinson said he doesn’t think it makes sense for Red Hat to physically merge Gluster and Ceph. He predicted that if Red Hat Storage does catch on, its success will be through Ceph – “the darling of the storage startup world” – tied to the broader success of the open source OpenStack cloud technology platform. Ceph has already started to gain momentum among cloud service providers, he said.

“Everybody’s playing with OpenStack, and if you’re playing with OpenStack, you’ve probably heard of Ceph. And Ceph has the interest of the broader storage community,” said Robinson. “Other big players are really interested in making Ceph a success. That works to Red Hat’s advantage.”

Henry Baltazar, a senior analyst at Cambridge, Massachusetts-based Forrester Research Inc., said he sees no problem with Red Hat having Gluster-based file and Ceph-based block and object storage options at this point, since the company doesn’t have much market share.

“They’re going to have two platforms in the foreseeable future. Those aren’t going to merge,” predicted Baltazar. “Gluster is definitely the file storage type. There are ways they could use it that can complement Ceph. It still remains to be seen where it will wind up 10 years from now.”

October 3, 2014  4:21 PM

EMC links arms with CloudLink for DRaaS

Dave Raffo
Disaster Recovery, DRaaS, EMC, Storage

EMC is aiming its new RecoverPoint for Virtual Machines at cloud DR, in partnership with cloud security vendor CloudLink Technologies.

RecoverPoint for Virtual Machines is a hypervisor-based version of EMC’s RecoverPoint replication software. It will be generally available Nov. 17. EMC will also integrate the product with CloudLink SecureVSA, which provides encryption for data at rest and data in motion.

The combined products can allow service providers to build DR as a Service (DRaaS), and enterprises can use them to replicate data to private or public clouds for DR.

RecoverPoint for Virtual Machines is a software-only product. Unlike previous versions of RecoverPoint, it is storage-agnostic, so it doesn’t require EMC arrays to run; it works with any VMware-certified storage. It is not hypervisor-agnostic yet, though. It supports VMware vSphere today, with support for Microsoft Hyper-V and KVM hypervisors on the roadmap.

It is EMC’s first replication software that works at the individual VM level. Instead of replicating storage LUNs as other RecoverPoint versions do, RecoverPoint for Virtual Machines splits and replicates writes for VMware vSphere VMs. It requires splitter code on each ESXi node running protected VMs, and at least one virtual appliance at each site. Customers can replicate VMs regardless of the hardware running at either end.

CloudLink SecureVSA adds security, allowing customers to store and manage encryption keys on-premises.

“One of the big inhibitors of going to a public cloud is security,” said Jean Banko, director of product marketing for EMC’s data protection division. “That’s why we partnered with CloudLink.”


October 3, 2014  2:45 PM

Nimble, Actifio show their Oracle chops

Dave Raffo
Actifio, Oracle OpenWorld, Storage

Oracle wasn’t the only vendor to toot its own storage horn at Oracle OpenWorld this week. A couple of smaller vendors played up their budding relationship with the database giant to show how they are growing their number of enterprise customers.

Array vendor Nimble Storage added a pre-validated SmartStack reference architecture for Oracle’s JD Edwards EnterpriseOne application, and copy data management pioneer Actifio increased its integration with Oracle apps to attract more database administrator customers.

Nimble first introduced its SmartStack reference architectures in late 2012 through a partnership with Cisco, and claims more than 200 SmartStack customers. The JD Edwards version consists of Nimble’s CS300 storage array, Cisco UCS Mini Server, Oracle VM, Oracle Linux, Oracle Database, and JD Edwards EnterpriseOne 9.1.

Nimble’s previous SmartStack flavors include VDI for Citrix and VMware, Microsoft Critical Applications, Oracle Critical Applications, Desktop/Server Virtualization, Server Virtualization/Private Cloud and Data Protection with CommVault.

Radhika Krishnan, Nimble VP of product marketing and alliances, said the JD Edwards SmartStack came about because more enterprises are starting to deploy Nimble storage, and JD Edwards can be a tricky app to size correctly.

“Sizing tends to be challenging, depending on the number of end users you have,” she said.

Actifio used the conference to show off its expanded support for Oracle applications, allowing it to become an Oracle Platinum Level partner. The expanded support for Actifio CDS includes Oracle Database 12c, RMAN and Exadata products.

The value of Actifio’s software is that it allows organizations to use one copy of data for production, backup, test/development or any other application.

Actifio reps said they modified their platform’s workflows to enable DBAs to automate the data flow, with the help of RESTful APIs.

Oracle DBAs can use the automated workflow to provide copies of their databases to developers in minutes, according to Actifio senior director of global marketing Andre Gilman.

“I call it RMAN on steroids,” he said. “The old school way can take days to weeks, and even with newer technologies it takes hours.

“You can create a live clone of a database and update a virtual image to all your developers at the same time. You don’t have to repeat the process as part of daily maintenance. You put it in as one workflow and it’s all automated.”

Actifio director of product marketing Chris Carrier said the integration came from months of co-development work with Oracle. He said Actifio uses RMAN to do its change-block tracking and built its LogSmart log management system specifically around Oracle, although it works with other databases. “If you show Oracle DBAs a different way to manage data, they get nervous. But if you’re leveraging RMAN, they like that,” Carrier said.
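
For readers unfamiliar with the RMAN feature Carrier refers to, block change tracking keeps a bitmap of changed blocks so incremental backups read only what changed rather than scanning the whole database. Below is a hedged sketch of enabling and checking it from Python with cx_Oracle; the connection details and tracking-file path are placeholders, and this shows the generic Oracle feature, not Actifio’s integration.

```python
# Hedged sketch: enabling Oracle block change tracking via cx_Oracle.
# Credentials, DSN and the tracking-file path are hypothetical; the
# statement requires SYSDBA privilege.
import cx_Oracle

conn = cx_Oracle.connect("sys", "password", "dbhost/orcl",
                         mode=cx_Oracle.SYSDBA)
cur = conn.cursor()
cur.execute("""
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
    USING FILE '/u01/app/oracle/bct/change_tracking.f'
""")
# Confirm tracking is on; incremental backups now read only changed blocks.
cur.execute("SELECT status, filename FROM v$block_change_tracking")
print(cur.fetchone())
```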


September 30, 2014  4:04 PM

EMC’s XtremIO 3.0 adds compression with disruptive upgrade for existing users

Carol Sliwa
Storage

EMC Corp. made available the 3.0 release of its XtremIO all-flash array today with new inline compression capabilities and performance improvements – but existing customers who want the software upgrade need to prepare for significant disruption to their production environments.

Josh Goldstein, vice president of marketing and product management for EMC’s XtremIO business unit, confirmed that users will need to move all of their data off of their XtremIO arrays to do the upgrade and then move it back onto the system once the work is complete.

EMC’s XtremIO division has taken some heat on the disruptive – and some say “destructive” – nature of the upgrade, especially in view of the company’s prior claims that the product supported non-disruptive upgrades (NDU) of software.

Goldstein said the company decided to make an exception for the 3.0 upgrade based on customer input about the inline compression capabilities, which he claimed could double the usable capacity of an XtremIO array in many cases.

“This was a choice that was made, and it was not an easy choice,” said Goldstein. “We could have delayed the feature. Originally we were planning to put this in later in the roadmap. If we had chosen to, we could have tied this to another hardware release, and it would have been something that existing customers could never take advantage of. Our customer base told us emphatically that that was not what they wanted.”

Goldstein said that EMC will provide, at no cost to customers, the option of professional services and extra XtremIO “swing” capacity, if necessary, to ensure that they have an equivalent system on which to put their data while the upgrade is taking place.

The disruptive nature of the 3.0 upgrade came to light recently through an “XtremIO Gotcha” blog post from Andrew Dauncey, a leader of the Melbourne, Australia, VMware user group (VMUG). Dauncey wrote: “As a customer with limited funds, this is the only array for a VDI project, where the business runs 24/7, so to have to wipe the array has massive impacts.” He said a systems integrator had offered a loan device to help with the upgrade.

Dauncey worked as a systems engineer at a public hospital in Australia at the time of his initial post on Sept. 14. He has since gone to work for IBM as a virtualization specialist.

In a blog post last Sunday, Dauncey noted EMC’s “marketing collateral” that advertised “non-disruptive software and firmware upgrades to ensure 7×24 continuous operations,” and he accused EMC of “false advertising” prior to the release of the updated 3.0 firmware.

Goldstein said, “The releases that we’ve had from the time the product went GA up until now were all NDU. The releases that we have going forward after this point will all be NDU as well.”

Chris Evans, an IT consultant who writes the blog “Architecting IT,” said via e-mail that HP upgraded its platform to cater to flash, and SolidFire released a new on-disk structure in the operating system for its all-flash array, without disruptive upgrades. He said what’s surprising in the XtremIO case is “that EMC didn’t foresee the volume of memory to store metadata only 12 months after their first release of code.”

Chad Sakac, a senior vice president of global systems engineering at EMC, shed some light on the technical underpinnings of the upgrade through his “Virtual Geek” personal blog, which he said has no official affiliation with the company. He said the 2.4 to 3.0 upgrade touches both the layout structure and the metadata indirection layer and, as a result, is disruptive to the host. He pointed to what he said were similar examples from “the vendor ecosystem.”

Goldstein confirmed that the block size is changing from 4 KB to 8 KB, but he said the block-size change is not the main reason for the disruptive upgrade. He said it’s “all these things taken together” that the company is doing to both add compression and improve performance.

“We already had inline deduplication in the array, and that means that you have to have metadata structures that can describe how to take unique blocks and reconstitute them into the information that the customers originally stored,” Goldstein said. “When you add inline compression, you have to have similar metadata information about how to reconstitute compressed blocks into what the customer originally stored. Those kinds of changes are things that change the data structures in the array, and that’s what we had to update.”

Goldstein said customers should not have to endure an outage in the “vast majority of cases.” Goldstein claimed that XtremIO is “overwhelmingly used” in virtual environments and moving virtualized workloads is not difficult. He mentioned VMware’s Storage vMotion and EMC’s PowerPath Migration Enabler as two of the main options to help, but he said there are others.
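
As a concrete illustration of the evacuation path Goldstein describes, here is a hedged pyVmomi sketch of a storage-only migration (Storage vMotion) that relocates a VM’s disks to a swing datastore. The host, credentials, VM and datastore names are placeholders, and newer pyVmomi releases may require SSL-context arguments to SmartConnect.

```python
# Hedged sketch: a storage-only migration (Storage vMotion) with pyVmomi.
# All names below are hypothetical; error handling is omitted for brevity.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)

vm = next(o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name == "app-vm-01")
ds = next(o for o in view.view
          if isinstance(o, vim.Datastore) and o.name == "swing-datastore")

# Relocate only the VM's storage; the VM keeps running on the same host.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
Disconnect(si)
```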

Customers also may choose to remain on the 2.4 code that EMC released in May. Goldstein said that EMC will continue to provide bug fixes on the prior 2.4 release for “quite a long time.”

“There’s nothing forcing them to upgrade,” he said.

Craig Englund, principal architect at Boston Scientific Corp., said EMC contacted Boston Scientific’s management about the disruptive upgrade in the spring. At the time, the IT team already had a loaner array for test purposes, and they asked to keep it longer after learning the 3.0 upgrade was destructive.

“It reformats the array. It’s destructive. It’s not just disruptive,” Englund said. “You have to move all of your data off the array for them to perform the upgrade.”

But, Englund said the team can “move things around storage-wise non-disruptively” because the environment is highly virtualized through VMware. He said he’s willing to go through the inconvenience to gain the ability to run more virtual desktops and SQL Server databases on the company’s existing XtremIO hardware. Early tests have shown a 1.9 to 1 capacity improvement for the database workloads and 1.3 to 1 for VDI, he said.

“They could have said, ‘If you want these new features, it’s coming out in the next hardware platform, and you’ll have to buy another frame to get it.’ But, they didn’t, and I think that’s great,” Englund said. “To try to get this out to all of the existing customers before they get too many workloads on them, I think, was considerate.”


September 30, 2014  2:28 PM

All those video cameras can be a boon for storage companies

Dave Raffo
Storage

Whenever Big Brother is watching, there is a storage vendor eager to store whatever Big Brother is seeing. And today, Big Brother is watching more places than ever before.

The video surveillance market, which consists of cameras, software, DVRs, storage and other hardware, is expected to reach $26 billion by 2018 and is growing twice as fast as the overall IT market, according to market research firm IHS. The main accelerators of that growth are the widespread use of surveillance in markets such as government, city surveillance and transportation, and the transition from analog video to more capacity-hungry digital.

So it’s no surprise that storage stalwarts EMC and Seagate are pushing hard into video surveillance.

EMC today launched a video surveillance practice, which includes a VNX array and partnerships tailored to the market. The VNX-VSS100 is configured for video surveillance cameras on the edge, with 4 TB nearline SAS drives and a mix of memory and connectivity to handle video files. The VNX-VSS100 comes in 24 TB and 120 TB configurations, and has been validated with video surveillance software and cameras, according to Michael Gallant, senior director of EMC’s video surveillance practice.
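
To see why surveillance chews through capacity, a rough sizing sketch helps. The camera count, per-stream bitrate and 24x7 recording assumption below are hypothetical illustrations, not EMC figures, but they show how quickly even the larger 120 TB configuration fills up.

```python
# Back-of-the-envelope surveillance sizing. Camera count, bitrate and
# continuous recording are assumed values for illustration only.
cameras = 200
mbps_per_camera = 4                     # e.g., one 1080p H.264 stream
usable_tb = 120                         # larger VNX-VSS100 configuration

bytes_per_day = cameras * mbps_per_camera * 1e6 / 8 * 86400
retention_days = usable_tb * 1e12 / bytes_per_day
print(f"~{retention_days:.0f} days of retention")   # roughly 14 days
```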

EMC has also tested its Isilon scale-out NAS arrays for core storage of video surveillance data in the data center. EMC has tested its storage with surveillance technology vendors such as Axis, Genetec, Milestone and Verint. Its video surveillance distributor and integrator partners include Avnet, Ingram Micro and ScanSource.

Gallant said EMC has been in the video surveillance market for eight years but the VNX-VSS100 is its first main storage platform built specifically for the market.

He said storage is the fastest growing part of the video surveillance market, and is expected to be around $3 billion in 2016.

“This is one of the most storage intensive application workloads,” Gallant said. “Governments are requiring longer retention of video, and the data collected is considered more valuable now. Organizations are putting a lot of edge storage devices to cover subway systems, railway stations and bus stations. There is a need for highly available high performance storage at the edge, and that data is being brought back to the core.”

Seagate today launched the Seagate Surveillance HDD, a hard drive available in 1 TB to 4 TB capacities, with 5 TB and 6 TB versions expected by the end of the year. The drive includes Seagate Rescue services, which the vendor said can typically restore data within two weeks, with a data recovery success rate of more than 90 percent. The drive is designed for the large streaming workloads used in video surveillance and carries a one million-hour mean time between failures (MTBF) rating to stay in the field longer.


September 29, 2014  11:44 AM

Oracle launches all-flash SAN, backup box for databases

Dave Raffo
Flash Array, Oracle, Storage

Larry Ellison took aim at EMC when he introduced a flash storage array and disk backup product at his Sunday keynote at Oracle OpenWorld.

Never mind that the world is filled with flash SANs and disk backup appliances. Ellison proclaimed the FS1 Flash Storage System and Zero Data Loss Recovery Appliance the greatest in their class for Oracle applications. He singled out EMC’s XtremIO as the flash system he is looking to compete with. While he didn’t mention EMC’s Data Domain among disk targets, that is the clear market leader.

Ellison recently stepped down as Oracle CEO to become its executive chairman and CTO, but opened the annual show Sunday night with a rundown of new Oracle products and services.

“This is our first big SAN product,” Ellison said of the FS1, adding it can scale to 16 nodes and be used as an all-flash or hybrid array mixing SSDs and hard disk drives.

As with all Oracle storage systems, the FS1 is designed specifically for Oracle applications. It scales to 912 TB of flash or 2.9 PB of combined SSD and HDD capacity with 30 drive enclosures.

The system also uses Oracle QoS Plus quality of service software to place data across four storage tiers. Customers can set application profiles for Oracle Database and other Oracle enterprise apps to set automated tiering.

FS1 systems come with a base controller or performance controller, and each system supports 30 drive enclosures. A base controller includes 64 GB of RAM cache or 16 GB NV-DIMM cache, and a performance controller has either 384 GB RAM or 32 GB NV-DIMM cache.

Drive enclosures support 400 GB performance SSDs, 1.6 TB capacity SSDs, 300 GB and 900 GB performance disk drives, and 4 TB capacity disk drives; a system supports any combination of these drives. A performance SSD enclosure includes either seven or 12 400 GB drives; a capacity SSD enclosure holds seven, 13 or 19 1.6 TB drives; a performance HDD enclosure comes with 24 300 GB or 24 900 GB 10,000 rpm drives; and a capacity HDD enclosure has 24 4 TB 7,200 rpm drives.

There are dozens of all-flash and hybrid flash systems on the market, but Ellison singled out XtremIO for comparison. “It’s much faster than XtremIO, and half the price,” Ellison said. He showed a chart with IOPS and throughput numbers for FS1 and XtremIO without giving the configurations that produced those numbers.

Ellison also unveiled the Oracle Zero Data Loss Recovery Appliance, proclaiming, “I named this myself, it’s a very catchy name.”

Ellison said the appliance is tightly integrated with the Oracle Database and the Recovery Manager (RMAN) backup tool to exceed performance of other backup appliances. Backup data blocks are validated as the recovery appliance receives them, and again as they are copied to tape or replicated. The appliance also periodically validates blocks on disk.

He said it is superior to other disk backup targets for Oracle databases. “Backup appliances don’t work well for databases because they think of databases as a bunch of files, and databases are not a bunch of files,” he said.

The recovery appliance also includes Delta Store software that validates the incoming changed data blocks, and then compresses, indexes and stores them. The appliance holds Virtual Full Database Backups, which are space-efficient pointers of full backups in point-in-time increments.

The recovery appliance uses source-side deduplication that Oracle calls Delta Push to identify changed blocks on production databases through RMAN block change tracking. That eliminates the need to read unchanged data.

A base configuration includes two compute servers and three storage servers connected internally through high-speed InfiniBand, and scales to 14 storage servers. A base configuration holds up to 37 TB of usable capacity (before dedupe) and a full rack has 224 TB of usable capacity. Oracle claims a rack can provide up to 2.2 PB of virtual full backups, enough for a 10-day recovery window for 224 TB of virtual full backups and logs. Up to 18 fully configured racks can be connected via InfiniBand.
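
Taken at face value, Oracle’s own numbers imply roughly a 10:1 space reduction from presenting virtual fulls rather than storing physical ones. A quick check of the arithmetic:

```python
# Quick arithmetic on the figures Oracle quotes: 224 TB usable per rack
# presenting up to 2.2 PB of virtual full backups.
usable_tb = 224
virtual_full_pb = 2.2
ratio = virtual_full_pb * 1000 / usable_tb
print(f"effective reduction ≈ {ratio:.1f}:1")   # ≈ 9.8:1
```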

Oracle claims a single rack appliance can ingest changed data at 12 TB per hour, and that performance increases incrementally as racks are added.


September 29, 2014  6:30 AM

Nasuni tests storage volume with more than 1 billion files

Carol Sliwa
Storage

Generating a billion files to prove a point is no trivial task.

Nasuni Corp. claimed to have spent 15 months creating and testing a single storage volume within its service with more than a billion files, including 27,000 snapshot versions of the file system. Nasuni Filers captured the data and did the necessary protocol transformation to send it to Amazon’s Simple Storage Service (S3) and store the files as objects.
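
The protocol transformation is conceptually simple, even if Nasuni’s implementation (chunking, encryption, versioning) is not. Here is a minimal hedged sketch of the file-to-S3-object step using boto3; the bucket, key and local path are invented for illustration and have nothing to do with Nasuni’s actual layout.

```python
# Minimal sketch of turning a file into an S3 object. A real Nasuni Filer
# also chunks, encrypts, dedupes and versions data; the bucket, key and
# local path below are hypothetical.
import boto3

s3 = boto3.client("s3")
with open("/exports/share/report.docx", "rb") as f:
    s3.put_object(Bucket="filer-volume-bucket",
                  Key="volume1/report.docx",
                  Body=f)
```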

The Natick, Massachusetts-based company used a test tool to generate the files directly on the Nasuni Filers, starting with a single entry-level system and eventually ramping up to a total of four filers to ingest the data faster. The two hardware and two virtual filers were located in Natick and nearby Marlborough, Massachusetts.

Nasuni CEO Andres Rodriguez said the files were representative of customer data, based on information from the company’s tracking statistics. He said there has been pressure on his company and competitors to demonstrate the scale of file systems, as customers increasingly want to deploy a single file system across multiple locations.

“We’re going after organizations that are running, say, Windows file servers in 50 locations and each of those Windows file servers may have 20 or 30 million files,” Rodriguez said. “They’re having problems with their backups or with their Windows file servers running out of room or running out of resources.”

Rodriguez said the UniFS global file system within Nasuni Filers at each site gives them access to their millions or billions of objects stored in Amazon S3 or Microsoft Azure. He said it doesn’t matter if the Nasuni Filer is a “tiny little box” or a “tiny little machine version.” “No matter how little the Nasuni Filer is, it can still see, access, read, write the one billion files,” said Rodriguez.

How big a deal is the billion-file proof point?

Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Oregon, viewed the Nasuni test as simply a “nice marketing assertion.”

“I commend them for running the test,” he said. “But, vendors such as EMC Isilon, Joyent, Panzura and other highly scalable scale-out file systems with global namespace can also provide access to all files from any node. A Nasuni filer is slower and primarily a gateway to objects stored in Amazon S3 or Microsoft Azure.”

Nasuni provided no performance information related to the billion-file demonstration. The company said only that data input/output performance varies based on the model of Nasuni Filer used. Higher end models support higher performance than entry level units, a company spokesman said.

Steve Duplessie, founder of Enterprise Strategy Group Inc. in Milford, Massachusetts, said via e-mail that Nasuni takes aim at secondary, or tier-2, files, and performance is a “non-issue” with that class of data. He said Panzura is probably closest in approach to Nasuni but plays at a different level and has a heavy hardware footprint. He said Isilon can scale to a billion files but not globally. Isilon and Panzura cater to primary, tier-1 data and carry the price tag to match, he said.

“If you were performance sensitive, you should use Isilon or NetApp,” said Duplessie. “Having said that, the overwhelming percentage of data in the organization is not performance sensitive, and the cloud is a fine place to keep it.”

Gene Ruth, a research director in enterprise storage at Gartner Inc., said he fields calls on a frequent basis from legal firms, construction companies, government agencies and other clients trying to provide common access to file environments from dozens, hundreds and in some cases thousands of branch offices.

“Nasuni is addressing the bulk of the market, which is support for universal access to files – being able to get at files on any device from anywhere. You have a common authoritative source that’s synchronized in the backend that provides those files,” said Ruth. “And they’re not the only ones that can do this.”

Ruth doesn’t view Nasuni’s billion-file announcement as significant, but he does see it as an indicator of the continuing evolution of what he calls cloud-integrated storage and what others often refer to as cloud gateways.

“Nasuni’s proven a point,” said Ruth, “that incrementally they’re getting bigger and more capable and more credible in addressing a bigger audience.”


September 26, 2014  9:41 AM

All-flash pioneers holding up well despite increased competition

Dave Raffo
Storage

It’s been a while since startups had the all-flash array market to themselves. All the major storage vendors now have one or more all-flash platforms. Still, the flash pioneers hold the lead in some ways.

Two recent reports by Gartner show Pure Storage, SolidFire and Kaminario more than holding their own in the over-crowded all-flash market. Pure joined EMC and IBM in the leaders section of Gartner’s all-flash Magic Quadrant, and SolidFire, Pure and Kaminario have the three highest-rated arrays in Gartner’s flash Critical Capabilities report. SolidFire and Kaminario are in the visionaries group in the Magic Quadrant, which looks at vendors rather than specific products and includes business considerations along with technology.

Gartner’s Critical Capabilities report judged 13 all-flash arrays. Gartner gave each system an overall ranking based on ecosystem (support for protocols, operating systems, hypervisors, etc.), manageability, multi-tenancy/security, performance, RAS (reliability, availability, serviceability), scalability and storage efficiency. It also ranked each array for its value in five specific use cases.
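
A Critical Capabilities overall score is essentially a weighted average of those per-category ratings. The weights and per-category ratings below are invented, chosen only so the arithmetic lands on a SolidFire-like 3.43; only the published overall scores in this post are real.

```python
# Illustrative only: how a weighted average of category ratings produces
# an overall Critical Capabilities score. Every number here is made up.
weights = {"ecosystem": 0.15, "manageability": 0.15,
           "multi-tenancy/security": 0.10, "performance": 0.20,
           "RAS": 0.15, "scalability": 0.15, "storage efficiency": 0.10}
ratings = {"ecosystem": 3.2, "manageability": 3.5,
           "multi-tenancy/security": 3.3, "performance": 3.6,
           "RAS": 3.4, "scalability": 3.5, "storage efficiency": 3.4}
overall = sum(weights[c] * ratings[c] for c in weights)
print(round(overall, 2))   # 3.43
```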

SolidFire ranked highest overall with a score of 3.43, followed by Pure at 3.41 and Kaminario at 3.36. EMC’s XtremIO and Hewlett-Packard’s 3PAR StoreServ 7450 scored highest among the large vendors’ arrays, tied for fourth at 3.32.

Pure received the highest ranking for online transaction processing, server virtualization and VDI while SolidFire took the top scores for high-performance computing and analytics.

SolidFire was lauded for its quality of service feature that delivers applications with guaranteed IOPS and broad cloud management features, while Gartner added that SolidFire is playing catch-up with traditional enterprise application integration. Gartner pointed out Pure’s good reputation for reliability, ease of use and storage data services, although its arrays have relatively low capacities. Kaminario won praise for its new inline compression, dedupe, and thin provisioning and its price of $2 per GB (when including storage efficiency) while taking hits for limited quality of service and no replication.

Gartner’s 2013 flash market share report issued earlier this year listed Pure as No. 2 behind IBM in all-flash array revenue for last year. IBM had $164.4 million in revenue from its FlashSystem with Pure raking in $114.1 million from its FlashArray platform.

The Magic Quadrant cautioned that Pure’s performance isn’t among the best when it comes to high IOPS and low latency, but Pure VP of products Matt Kixmoeller said that is by design. He said FlashArray was designed with storage services in mind, and those services will eventually win out in the flash market.

“From day one we’ve been focused on building the best platform for hosting many applications,” he said. “If someone is looking for a drag race between flash systems, they’re probably looking for the wrong things.

“We’ve seen a change in our business over the last six months. A lot of deployments were single application in the beginning. Once the customers got used to flash in a single application, they would use it in other apps. A lot of deals now are multi-arrays. Even if they’re a single array, they are multi-applications. Flash is now a replacement for tier one storage.”

SolidFire marketing VP Jay Prassl agreed with that. SolidFire began selling its array as storage for cloud providers, so it needed to handle multiple applications. “The separation now comes from the demand to go beyond providing more speed,” Prassl said. “If I can’t put a lot of applications on here and make my life easier, then I’m managing a lot of disparate systems.”


September 24, 2014  9:25 AM

Overland increases revs, losses while waiting for Sphere 3D

Dave Raffo
Overland Storage, Storage

Overland Storage had significant revenue increases but continued to lose money last quarter as it absorbed Tandberg Data while waiting to be absorbed by Sphere 3D.

Overland Tuesday reported its earnings for last quarter and its fiscal year that ended June 30, which will likely be its last annual revenue report before it merges with Sphere 3D. Overland announced the $81 million Sphere 3D acquisition in May, five months after Overland acquired Tandberg Data.

Thanks largely to Tandberg’s RDX removable disk technology, Overland increased its annual revenue 37 percent over last year to $65.7 million. Its revenue for last quarter doubled compared to the same quarter in 2013, from $12.1 million to $24.2 million.

Overland’s disk system revenue shot up to $11.5 million last quarter from $2.5 million in the same quarter a year earlier, including $8.2 million from RDX products and $2.4 million from its SnapServer networked storage platform. For the year, Overland recorded $14.4 million of revenue from RDX removable drives, which makes up most of its $17.5 million increase in revenue over 2013.

Overland began selling Tandberg products last January.

Overland isn’t doing nearly as well with its legacy products. SnapServer annual revenue went from $9 million to $9.5 million, and tape automation revenue declined from $16.8 million to $14.2 million.

CEO Eric Kelly said Overland is on track to become a $100 million revenue company, “which we expect to provide a clear path to profitability.”

The losses continue for now. Overland dropped $7.4 million last quarter and $22.9 million for the year. The annual loss was worse than in 2013, when Overland lost $19.6 million. Overland finished its fiscal year with $12.1 million in cash and short-term investments, compared with $8.8 million last year.

“We have made significant progress in transforming the company,” Kelly said.

More transformation is ahead. Kelly said the target date to close the pending Sphere 3D merger is the end of October. As announced in May, Sphere 3D will pay $81 million for Overland.

That deal brings a new set of questions for Overland. How does Sphere 3D – which had only $2.75 million in revenue and lost $3.4 million over the first six months of this year – justify paying $81 million for another company that has a long history of losses? And what is the status of Sphere 3D’s Glassware 2.0 virtual desktop software, which has been in development for years with little to show?

Kelly said Glassware technology has been deployed in “multiple customer environments,” and Overland and Sphere 3D are ready to extend its availability. The companies announced a deal in May with PACS vendor Novarad to sell Glassware on SnapServer DX2 appliances with Sphere 3D’s Desktop Cloud Orchestrator management software. Novarad is marketing that product as NovaGlass. However, when pressed on the earnings call, Kelly could not say if Novarad has any customers for NovaGlass yet or is still testing the product.

“If you have a radiologist out there that wants to talk to them, I’d be more than happy to make that introduction,” he said.

It will take a lot of introductions to pull Sphere 3D/Overland out of the red.


September 22, 2014  1:45 PM

EMC merger intriguing but unlikely

Dave Raffo
Cisco, Dell, EMC, HP, Oracle, Storage

There has been a lot of speculation about who will succeed Joe Tucci if the EMC CEO really retires next February as planned. The leading candidates were thought to be from inside the EMC federation of companies, most notably EMC information infrastructure CEO David Goulden or VMware CEO Pat Gelsinger.

Apparently the options to replace Tucci also include Meg Whitman, Michael Dell and other CEOs of large tech companies that are in talks to merge with or acquire EMC, according to business publications and networks. The Wall Street Journal, Barron’s, New York Times and CNBC have all weighed in over the past two days, two weeks after the New York Post reported EMC was holding talks to merge or sell VMware.

This flurry of merger talk possibly comes from someone at large EMC shareholder Elliott Management, which has publicly urged EMC to split off VMware and perhaps other pieces, or from an EMC exec who wants to make it known that EMC is living up to its commitment to explore its options. In any case, there is talk happening but not necessarily any action.

The Wall Street Journal Sunday said EMC recently broke off talks to merge with HP in a deal that would make Tucci the chairman of HP while Whitman would remain CEO. Reasons cited for talks falling through include EMC asking for too much and lack of faith that both companies could get shareholders to ratify the terms. The Journal story said HP and EMC have broken off talks, although Barron’s today quoted a source saying talks could resume and “things feel imminent-ish.” According to the New York Times, a combined HP-EMC would have a market valuation of $129 billion.

The Journal story also claimed that EMC and Dell have had discussions. Dell might want VMware (what server company wouldn’t?) or select EMC storage products, but it is unlikely that Dell is big enough to absorb all of EMC.

Cisco and Oracle have also been mentioned as companies that might be interested in EMC.

Despite all the stories and all the possible suitors, it’s unlikely that EMC will be bought or merged. And Tucci has said he does not want to sell EMC’s 80 percent stake in VMware.

Buying EMC whole would be a major undertaking, and the companies mentioned have other issues to deal with in today’s challenging IT market.

For example, an HP-EMC merger would be disruptive to the storage groups in both companies. All HP storage products have direct competitors at EMC. HP would either have to dismantle its current storage portfolio or get rid of a bunch of products from both companies.

Cisco-EMC merger rumors pop up every couple of years. Cisco has been comfortable partnering for storage – mainly with EMC and NetApp – and has no track record of taking on large acquisitions.

Oracle is the wild card in this situation. So far, except for the StorageTek tape business it picked up in the Sun acquisition, Oracle’s only interest in storage has been selling devices that improve the performance of its software.

But with Larry Ellison pulling back to executive chairman/CTO and Mark Hurd and Safra Catz taking over as CEOs, the database giant could go in a different direction. Still, that would be a massive task as the CEOs and Ellison settle into their new roles.

