Larry Ellison took aim at EMC when he introduced a flash storage array and disk backup product at his Sunday keynote at Oracle Open World.
Never mind that the world is filled with flash SANs and disk backup appliances. Ellison proclaimed the FS1 Flash Storage System and Zero Data Loss Recovery Appliance the greatest in their class for Oracle applications. He singled out EMC’s XtremIO as the flash system he is looking to compete with. He didn’t mention EMC’s Data Domain among disk targets, although that is the clear market leader.
Ellison recently stepped down as Oracle CEO to become its executive chairman and CTO, but opened the annual show Sunday night with a rundown of new Oracle products and services.
“This is our first big SAN product,” Ellison said of the FS1, adding it can scale to 16 nodes and be used as an all-flash or hybrid array mixing SSDs and hard disk drives.
As with all Oracle storage systems, the FS1 is designed specifically for Oracle applications. It scales to 912 TB of flash or 2.9 PB of combined SSD and HDD capacity with 30 drive enclosures.
The system also uses Oracle QoS Plus quality of service software to place data across four storage tiers. Customers can set application profiles for Oracle Database and other Oracle enterprise apps to drive automated tiering.
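Oracle hasn’t published the internals of QoS Plus, but profile-driven tiering of this general kind can be sketched as follows. The tier names, profile labels and scoring rule below are illustrative assumptions, not Oracle’s implementation:

```python
# Hypothetical sketch of profile-driven auto-tiering (not Oracle's actual
# QoS Plus logic): blocks are scored by access frequency weighted by the
# priority of the application profile that owns them, then placed on the
# fastest tier with free capacity, highest score first.

TIERS = ["perf_ssd", "cap_ssd", "perf_hdd", "cap_hdd"]  # fastest to slowest
PROFILE_PRIORITY = {"oracle_db": 3, "generic_app": 1}   # illustrative profiles

def place_blocks(blocks, tier_capacity):
    """blocks: list of (block_id, profile, access_count).
    tier_capacity: dict of tier -> number of blocks it can hold.
    Returns dict of block_id -> assigned tier."""
    scored = sorted(
        blocks,
        key=lambda b: b[2] * PROFILE_PRIORITY[b[1]],  # frequency x priority
        reverse=True,
    )
    free = dict(tier_capacity)
    placement = {}
    for block_id, profile, _count in scored:
        for tier in TIERS:                  # fastest tier with room wins
            if free[tier] > 0:
                placement[block_id] = tier
                free[tier] -= 1
                break
    return placement
```

In this toy model a hot Oracle Database block outbids an equally hot generic-app block for the performance SSD tier, which matches the stated intent of letting application profiles steer placement.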
FS1 systems come with a base controller or performance controller, and each system supports 30 drive enclosures. A base controller includes 64 GB of RAM cache or 16 GB NV-DIMM cache, and a performance controller has either 384 GB RAM or 32 GB NV-DIMM cache.
Drive enclosures support 400 GB performance SSDs, 1.6 TB capacity SSDs, 300 GB and 900 GB performance disk drives, and 4 TB capacity disk drives, and a system supports any combination of these drives. A performance SSD enclosure includes seven or 12 400 GB drives; a capacity SSD enclosure holds seven, 13 or 19 1.6 TB drives; a performance HDD enclosure comes with 24 300 GB or 24 900 GB 10,000 rpm drives; and a capacity HDD enclosure has 24 4 TB 7,200 rpm drives.
There are dozens of all-flash and hybrid flash systems on the market, but Ellison singled out XtremIO for comparison. “It’s much faster than XtremIO, and half the price,” Ellison said. He showed a chart with IOPS and throughput numbers for FS1 and XtremIO without giving the configurations that produced those numbers.
Ellison also unveiled the Oracle Zero Data Loss Recovery Appliance, proclaiming, “I named this myself. It’s a very catchy name.”
Ellison said the appliance is tightly integrated with the Oracle Database and the Recovery Manager (RMAN) backup tool to exceed performance of other backup appliances. Backup data blocks are validated as the recovery appliance receives them, and again as they are copied to tape or replicated. The appliance also periodically validates blocks on disk.
He said it is superior to other disk backup targets for Oracle databases. “Backup appliances don’t work well for databases because they think of databases as a bunch of files, and databases are not a bunch of files,” he said.
The recovery appliance also includes Delta Store software that validates the incoming changed data blocks, and then compresses, indexes and stores them. The appliance holds Virtual Full Database Backups, which are space-efficient pointers of full backups in point-in-time increments.
The recovery appliance uses source-side deduplication that Oracle calls Delta Push to identify changed blocks on production databases through RMAN block change tracking. That eliminates the need to read unchanged data.
A base configuration includes two compute servers and three storage servers connected internally through high-speed InfiniBand, and scales to 14 storage servers. A base configuration holds up to 37 TB of usable capacity (before dedupe) and a full rack has 224 TB of usable capacity. Oracle claims a rack can provide up to 2.2 PB of virtual full backups. That is enough for a 10-day recovery window of 224 TB virtual full backups and logs. Up to 18 fully configured racks can be connected via InfiniBand.
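The Delta Push and virtual full scheme described above amounts to an incremental-forever block store: each backup ships only new or changed blocks, and a “virtual full” is just a per-point-in-time index of pointers into the shared store. Here is a minimal sketch of the idea; it is illustrative only, not Oracle’s Delta Store code, and real RMAN tracks changes by block address rather than the content hashing used here:

```python
# Sketch of an incremental-forever backup store with "virtual fulls":
# changed blocks are stored once, and each recovery point is a cheap
# index of pointers into the shared block store. Illustrative only.
import hashlib

class RecoveryStore:
    def __init__(self):
        self.blocks = {}          # content hash -> block bytes
        self.virtual_fulls = []   # one pointer map per recovery point

    def backup(self, database_blocks):
        """database_blocks: list of bytes, one per block position.
        Stores only blocks not already present; records a virtual full."""
        pointers = []
        for block in database_blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # changed/new block: store it
                self.blocks[digest] = block
            pointers.append(digest)         # unchanged block: pointer only
        self.virtual_fulls.append(pointers)

    def restore(self, point):
        """Rebuild the full database image for a given recovery point."""
        return [self.blocks[d] for d in self.virtual_fulls[point]]
```

With daily recovery points where only a small fraction of blocks change, the store holds roughly one full copy plus the deltas, which is how 224 TB of usable disk can represent 2.2 PB of virtual fulls, roughly a 10-to-1 ratio.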
Oracle claims a single rack appliance can ingest changed data at 12 TB per hour, and that performance increases incrementally as racks are added.
Generating a billion files to prove a point is no trivial task.
Nasuni Corp. claimed to have spent 15 months creating and testing a single storage volume within its service with more than a billion files, including 27,000 snapshot versions of the file system. Nasuni Filers captured the data and did the necessary protocol transformation to send it to Amazon’s Simple Storage Service (S3) and store the files as objects.
The Natick, Massachusetts-based company used a test tool to generate the files directly on the Nasuni Filers, starting with a single entry-level system and eventually ramping up to a total of four filers to ingest the data faster. The two hardware and two virtual filers were located in Natick and nearby Marlborough, Massachusetts.
Nasuni CEO Andres Rodriguez said the files were representative of customer data, based on information from the company’s tracking statistics. He said there has been pressure on his company and competitors to demonstrate the scale of file systems, as customers increasingly want to deploy a single file system across multiple locations.
“We’re going after organizations that are running, say, Windows file servers in 50 locations and each of those Windows file servers may have 20 or 30 million files,” Rodriguez said. “They’re having problems with their backups or with their Windows file servers running out of room or running out of resources.”
Rodriguez said the UniFS global file system within the Nasuni Filers at each site gives customers access to their millions or billions of objects stored in Amazon S3 or Microsoft Azure. He said it doesn’t matter if the Nasuni Filer is a “tiny little box” or a “tiny little machine version.” “No matter how little the Nasuni Filer is, it can still see, access, read, write the one billion files,” said Rodriguez.
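Nasuni hasn’t published UniFS internals, but the gateway pattern Rodriguez describes, a small local filer fronting a complete namespace held in object storage, can be sketched like this. The object-key scheme and cache policy are assumptions for illustration, not Nasuni’s design:

```python
# Illustrative sketch of a cloud file gateway: every file lives as an
# object in a shared store keyed by path, so any filer, however small,
# sees the full namespace; a bounded local cache holds hot files.
from collections import OrderedDict

class ObjectStore:
    """Stand-in for S3/Azure: the authoritative copy of every file."""
    def __init__(self):
        self.objects = {}

class Filer:
    def __init__(self, store, cache_size):
        self.store = store
        self.cache = OrderedDict()   # path -> bytes, LRU eviction
        self.cache_size = cache_size

    def write(self, path, data):
        self.store.objects[path] = data      # push through to object store
        self._cache(path, data)

    def read(self, path):
        if path in self.cache:               # cache hit: serve locally
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.store.objects[path]      # miss: fetch from the cloud
        self._cache(path, data)
        return data

    def _cache(self, path, data):
        self.cache[path] = data
        self.cache.move_to_end(path)
        while len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict least recently used
```

Two filers sharing one store see each other’s writes, and a filer with a one-entry cache can still read every file in the namespace, which is the point Rodriguez is making about “tiny little” filers.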
How big a deal is the billion-file proof point?
Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Oregon, viewed the Nasuni test as simply a “nice marketing assertion.”
“I commend them for running the test,” he said. “But, vendors such as EMC Isilon, Joyent, Panzura and other highly scalable scale-out file systems with global namespace can also provide access to all files from any node. A Nasuni filer is slower and primarily a gateway to objects stored in Amazon S3 or Microsoft Azure.”
Nasuni provided no performance information related to the billion-file demonstration. The company said only that data input/output performance varies based on the model of Nasuni Filer used. Higher-end models support higher performance than entry-level units, a company spokesman said.
Steve Duplessie, founder of Enterprise Strategy Group Inc. in Milford, Mass., said via e-mail that Nasuni takes aim at secondary, or tier-2, files, and performance is a “non-issue” with that class of data. He said Panzura is probably closest in approach to Nasuni but plays at a different level and has a heavy hardware footprint. He said Isilon can scale to a billion files but not globally. Isilon and Panzura cater to primary tier-1 data and carry the price tag to match, he said.
“If you were performance sensitive, you should use Isilon or NetApp,” said Duplessie. “Having said that, the overwhelming percentage of data in the organization is not performance sensitive, and the cloud is a fine place to keep it.”
Gene Ruth, a research director in enterprise storage at Gartner Inc., said he fields calls on a frequent basis from legal firms, construction companies, government agencies and other clients trying to provide common access to file environments from dozens, hundreds and in some cases thousands of branch offices.
“Nasuni is addressing the bulk of the market, which is support for universal access to files – being able to get at files on any device from anywhere. You have a common authoritative source that’s synchronized in the backend that provides those files,” said Ruth. “And they’re not the only ones that can do this.”
Ruth doesn’t view Nasuni’s billion-file announcement as significant, but he does see it as an indicator of the continuing evolution of what he calls cloud-integrated storage and what others often refer to as cloud gateways.
“Nasuni’s proven a point,” said Ruth, “that incrementally they’re getting bigger and more capable and more credible in addressing a bigger audience.”
It’s been a while since startups had the all-flash array market to themselves. All the major storage vendors now have one or more all-flash platforms. Still, the flash array pioneers hold the lead in some ways.
Two recent reports by Gartner show Pure Storage, SolidFire and Kaminario more than holding their own in the over-crowded all-flash market. Pure joined EMC and IBM in the leaders section of Gartner’s all-flash magic quadrant, and SolidFire, Pure and Kaminario have the three highest rated arrays in Gartner’s flash critical capabilities report. SolidFire and Kaminario are in the visionaries group in the magic quadrant, which looks at vendors rather than specific products and includes business considerations along with technology.
Gartner’s critical capabilities report judged 13 all-flash arrays. Gartner gave each system an overall ranking based on ecosystem (support for protocols, operating systems, hypervisors, etc.), manageability, multi-tenancy/security, performance, RAS (reliability, availability, serviceability), scalability and storage efficiency. It also ranked each array for its value in five specific use cases.
SolidFire ranked highest overall with a score of 3.43, followed by Pure at 3.41 and Kaminario at 3.36. EMC’s XtremIO and Hewlett-Packard’s 3PAR StoreServ 7450 scored highest among the large vendors’ arrays, tying for fourth at 3.32.
Pure received the highest ranking for online transaction processing, server virtualization and VDI while SolidFire took the top scores for high-performance computing and analytics.
SolidFire was lauded for its quality of service feature, which guarantees applications a set level of IOPS, and for its broad cloud management features, while Gartner added that SolidFire is playing catch-up on traditional enterprise application integration. Gartner pointed out Pure’s good reputation for reliability, ease of use and storage data services, although its arrays have relatively low capacities. Kaminario won praise for its new inline compression, dedupe and thin provisioning and its price of $2 per GB (when including storage efficiency), while taking hits for limited quality of service and no replication.
Gartner’s 2013 flash market share report issued earlier this year listed Pure as No. 2 behind IBM in all-flash array revenue for last year. IBM had $164.4 million in revenue from its FlashSystem with Pure raking in $114.1 million from its FlashArray platform.
The magic quadrant cautioned that Pure’s performance isn’t among the best when it comes to high IOPS and low latency, but Pure VP of products Matt Kixmoeller said that is by design. He says FlashArray was designed with storage services in mind, and those services will eventually win out in the flash market.
“From day one we’ve been focused on building the best platform for hosting many applications,” he said. “If someone is looking for a drag race between flash systems, they’re probably looking for the wrong things.
“We’ve seen a change in our business over the last six months. A lot of deployments were single application in the beginning. Once the customers got used to flash in a single application, they would use it in other apps. A lot of deals now are multi-arrays. Even if they’re a single array, they are multi-applications. Flash is now a replacement for tier one storage.”
SolidFire marketing VP Jay Prassl agreed with that. SolidFire began selling its array as storage for cloud providers, so it needed to handle multiple applications. “The separation now comes from the demand to go beyond providing more speed,” Prassl said. “If I can’t put a lot of applications on here and make my life easier, then I’m managing a lot of disparate systems.”
Overland Storage had significant revenue increases while continuing to lose money last quarter as it absorbed Tandberg Data while waiting to be absorbed by Sphere 3D.
Overland Tuesday reported its earnings for last quarter and its fiscal year that ended June 30, which will likely be its last annual revenue report before it merges with Sphere 3D. Overland announced the $81 million Sphere 3D acquisition in May, five months after Overland acquired Tandberg Data.
Thanks largely to Tandberg’s RDX removable disk technology, Overland increased its annual revenue 37 percent over last year to $65.7 million. Its revenue for last quarter doubled compared to the same quarter in 2013, from $12.1 million to $24.2 million.
Overland’s disk system revenue shot up to $11.5 million last quarter from $2.5 million a year earlier, including $8.2 million from RDX products and $2.4 million from its SnapServer networked storage platform. For the year, Overland recorded $14.4 million of revenue from RDX removable drives, which makes up most of its $17.5 million increase in revenue over 2013.
Overland began selling Tandberg products last January.
Overland isn’t doing nearly as well with its legacy products. SnapServer annual revenue went from $9 million to $9.5 million, and tape automation revenue declined from $16.8 million to $14.2 million.
CEO Eric Kelly said Overland is on track to become a $100 million revenue company, “which we expect to provide a clear path to profitability.”
The losses continue for now. Overland dropped $7.4 million last quarter and $22.9 million for the year. The annual loss was worse than in 2013, when Overland lost $19.6 million. Overland finished its fiscal year with $12.1 million in cash and short-term investments, compared with $8.8 million a year earlier.
“We have made significant progress in transforming the company,” Kelly said.
More transformation is ahead. Kelly said the target date to close the pending Sphere 3D merger is the end of October. As announced in May, Sphere 3D will pay $81 million for Overland.
That deal brings a new set of questions for Overland. How does Sphere 3D – which had only $2.75 million in revenue and lost $3.4 million over the first six months of this year – justify paying $81 million for another company that has a long history of losses? And what is the status of Sphere 3D’s Glassware 2.0 virtual desktop software, which has been in development for years with little to show for it?
Kelly said Glassware technology has been deployed in “multiple customer environments,” and Overland and Sphere 3D are ready to extend its availability. The companies announced a deal in May with PACS vendor Novarad to sell Glassware on SnapServer DX2 appliances with Sphere 3D’s Desktop Cloud Orchestrator management software. Novarad is marketing that product as NovaGlass. However, when pressed on the earnings call, Kelly could not say if Novarad has any customers for NovaGlass yet or is still testing the product.
“If you have a radiologist out there that wants to talk to them, I’d be more than happy to make that introduction,” he said.
It will take a lot of introductions to pull Sphere 3D/Overland out of the red.
There has been a lot of speculation about who will succeed Joe Tucci if the EMC CEO really retires next February as planned. The leading candidates were thought to be from inside the EMC federation of companies, most notably EMC information infrastructure CEO David Goulden or VMware CEO Pat Gelsinger.
Apparently the options to replace Tucci also include Meg Whitman, Michael Dell and other CEOs of large tech companies that are in talks to merge or acquire EMC, according to business publications and networks. The Wall Street Journal, Barron’s, New York Times and CNBC have all weighed in over the past two days, two weeks after the New York Post reported EMC was holding talks to merge or sell VMware.
This flurry of merger talk possibly comes from someone at Elliott Management, the large EMC shareholder that has publicly urged EMC to split off VMware and perhaps other pieces. Or perhaps an EMC exec wants to make it known that EMC is living up to its commitment to explore its options. In any case, there is talk happening but not necessarily any action.
The Wall Street Journal Sunday said EMC recently broke off talks to merge with HP in a deal that would make Tucci the chairman of HP while Whitman would remain CEO. Reasons cited for talks falling through include EMC asking for too much and lack of faith that both companies could get shareholders to ratify the terms. The Journal story said HP and EMC have broken off talks, although Barron’s today quoted a source saying talks could resume and “things feel imminent-ish.” According to the New York Times, a combined HP-EMC would have a market valuation of $129 billion.
The Journal story also claimed that EMC and Dell have had discussions. Dell might want VMware (what server company wouldn’t?) or select EMC storage products, but it is unlikely that Dell is big enough to absorb all of EMC.
Cisco and Oracle have also been mentioned as companies that might be interested in EMC.
Despite all the stories and all the possible suitors, it’s unlikely that EMC will be bought or merged. And Tucci has said he does not want to sell EMC’s 80 percent stake in VMware.
Buying EMC whole would be a major undertaking, and the companies mentioned have other issues to deal with in today’s challenging IT market.
For example, an HP-EMC merger would be disruptive to the storage groups in both companies. All HP storage products have direct competitors at EMC. HP would either have to dismantle its current storage portfolio or get rid of a bunch of products from both companies.
Cisco-EMC merger rumors pop up every couple of years. Cisco has been comfortable partnering for storage – mainly with EMC and NetApp – and has no track record of taking on large acquisitions.
Oracle is the wild card in this situation. So far, except for the StorageTek tape business it picked up in the Sun acquisition, Oracle’s only interest in storage is selling devices that improve performance of its software.
But with Larry Ellison pulling back to executive chairman/CTO and Mark Hurd and Safra Catz taking over as CEOs, the database giant could go in a different direction. Still, that would be a massive task as the CEOs and Ellison settle into their new roles.
The backup appliance market rebounded last quarter following a year-over-year decline in the first quarter of the year, according to IDC’s quarterly tracker numbers.
The backup appliance market hit $783.2 million in revenue last quarter, up 8.5 percent over the same quarter last year. That follows a 2.5 percent year-over-year decline to $664.5 million in the first quarter of 2014 for what IDC calls the purpose-built backup appliance (PBBA) market.
The rise in backup appliance revenue came in a quarter in which disk storage systems declined, so the gains were not part of overall increased spending. However, data protection software revenue also increased 10.2 percent year-over-year last quarter, according to another IDC report.
Robert Amatruda, IDC’s research director for data protection and recovery, maintains the rebound shows the value that appliances bring to backup – especially as companies begin preparing to incorporate the cloud.
“I believe that appliances bring measurable value to the data protection process,” Amatruda wrote in an e-mail. “PBBA systems are turnkey, highly tuned for backup and recovery. Also, they will be instrumental in the new era of cloud backup. PBBAs will help facilitate the movement of data on and off premise.”
EMC maintained its overall lead with $498.7 million for 63.7 percent of the market, thanks to its Data Domain platform. EMC’s revenue grew 10.7 percent and its market share ticked up 1.3 percent from last year and 4.8 percent over the first quarter of 2014.
No. 2 Symantec made a bigger jump with $108.5 million for 21.9 percent growth. Symantec’s revenue came mostly from its NetBackup appliances, and enabled the vendor to move from 12.3 percent of the market in the second quarter last year to 13.9 percent in 2014. The quarter was a big turnaround for Symantec, which declined 10.9 percent year-over-year in the first quarter of 2014.
IBM remained third with $53.6 million in revenue, up 2.3 percent from last year. No. 4 Hewlett-Packard dropped 20.1 percent to $30.6 million, and its share fell from 5.3 percent to 3.9 percent. One reason for the big decline is HP discontinued its VLS enterprise backup target that it sold through an OEM deal with Sepaton. HP is now solely focused on its internally developed StoreOnce appliances.
“HP needs to continue to aggressively market and propagate StoreOnce inside and outside of its installed base,” Amatruda wrote. “HP has great technology and broad portfolio. There’s no reason it should not be growing faster.”
Barracuda moved into sole possession of fifth place with 35.7 percent growth to $16 million. That gives it 2 percent of the market – up from 1.6 percent a year ago. Barracuda was in a statistical tie with Quantum in the first quarter of 2014.
You think you have high storage capacity needs? Well, this week the National Center for Supercomputing Applications (NCSA) added 20 PB – that’s right, petabytes – of tape capacity for online data for its Blue Waters supercomputer. And that should last about a year.
The NCSA uses four Spectra Logic 17-frame T-Finity tape libraries with IBM TS1140 tape drives for all of its active archiving for the Blue Waters supercomputer that went into operation a year ago. That setup can store 380 PB, which should be enough for Blue Waters’ expected five-year lifespan. Michelle Butler, NCSA senior technical program manager, said Blue Waters’ archiving data is expected to grow by about 60 percent to 70 percent per year.
The Spectra Logic libraries are connected to a Cray-branded Seagate high-performance computing disk system that holds 35 PB of raw and 25 PB of usable capacity.
“We need to be able to stay ahead of our users,” Butler said. “We are continuously growing, but we have stored a little less than 20 PB in data in our first year.”
Blue Waters, based at the University of Illinois at Urbana-Champaign, is used for a variety of research data. Applications include weather prediction and analyzing how the cosmos developed after the Big Bang. Butler said there are 36 teams of 10 to 20 researchers each that use the system, with between 100 and 200 users online at any time. The supercomputer includes 28 systems dedicated to online data movement and 50 for nearline data movement using one or two 40-Gigabit Ethernet cards.
Butler said NCSA chose tape for archiving because hundreds of petabytes of disk would be too costly. It is also easy to non-disruptively add capacity to the tape libraries.
Writing that much data to tape efficiently and concurrently did require the Blue Waters team to write a RAIT (redundant array of inexpensive tapes) utility for its IBM HPSS (High Performance Storage System) hierarchical storage management system.
RAIT writes data in nine-wide stripes (seven data and two parity) and allows for the loss of two tapes without losing access to data.
“With RAIT, we can stripe data and still protect it,” she said. “We needed to stripe data and write data extremely fast to the tape drive. We couldn’t single-stream files, that’s too slow. But we didn’t want to lose users’ data if a tape or drive fails. With a seven-wide stripe of data, that would be 28 TB of data if we drop a tape. Now we can do seven wide stripes of data and two wide stripes of parity. We can lose two tapes and still continue to retrieve users’ data.”
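The striping Butler describes can be illustrated with a single-parity sketch: split the data across seven stripes, add an XOR parity stripe, and any one lost tape can be rebuilt from the rest. This is not the HPSS code, and surviving two lost tapes, as Blue Waters’ RAIT does, requires a second independent parity code (as in RAID 6), which is omitted here for brevity:

```python
# Single-parity striping sketch: split data across n "tapes" plus one
# XOR parity tape, then reconstruct any one lost tape. Blue Waters' RAIT
# uses two parity stripes (a RAID 6-style second code) to survive two
# lost tapes; that second code is omitted here for brevity.

def stripe(data: bytes, n: int):
    """Split data into n equal data stripes plus one XOR parity stripe."""
    if len(data) % n:
        data += b"\x00" * (n - len(data) % n)    # pad to stripe boundary
    width = len(data) // n
    stripes = [data[i * width:(i + 1) * width] for i in range(n)]
    parity = bytearray(width)
    for s in stripes:
        for i, byte in enumerate(s):
            parity[i] ^= byte                    # running XOR per column
    return stripes + [bytes(parity)]

def reconstruct(stripes, lost_index):
    """Rebuild one missing stripe (data or parity) by XORing the rest."""
    width = len(next(s for s in stripes if s is not None))
    rebuilt = bytearray(width)
    for idx, s in enumerate(stripes):
        if idx != lost_index:
            for i, byte in enumerate(s):
                rebuilt[i] ^= byte
    return bytes(rebuilt)
```

Because XOR is its own inverse, XORing the surviving eight stripes of a nine-wide set yields exactly the missing one, so a dropped tape costs a rebuild rather than users’ data.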
Now in its twelfth year, the Storage magazine/SearchStorage.com Products of the Year competition is open for entries. If you’re a vendor, a PR firm representing a vendor or just a very satisfied user, fill out the entry form today to make sure your product is considered for one of these prestigious awards.
As you may recall, our Products of the Year program is a bit different from some other “best products” or “user choice” awards—we focus on what you have done lately, so we consider new products that debuted during the last year or existing products that were significantly revved during that time.
This year, our panel of judges will consider products in five categories:
- Storage Systems – SAN/NAS/multi-protocol systems, converged/hyper-converged infrastructure products, HDDs and SSDs, disk controllers, caching appliances, storage virtualization appliances, cloud-integrated storage
- Backup Hardware – tape libraries/drives, backup media, disk backup targets, VTL, dedupe devices, cloud backup gateways
- Backup and DR Software and Services – backup/recovery software, cloud backup/recovery services, disaster recovery, snapshot and replication, electronic vaulting, archivers
- Storage Management Tools – SRM/SAN management software, performance monitoring, configuration management, provisioning, data reduction
- Storage System Software – file systems, volume managers, storage virtualization software, security software, storage optimization, solid-state caching, cloud storage software, software-defined storage
Our panel of judges includes some of the most knowledgeable and best known storage industry experts—analysts, consultants, users and editors.
Don’t let this opportunity to gain some recognition for your product slip by—the deadline for entries is Friday, October 3 at 5:00 p.m. PST. You can enter as many products as you like, but please don’t enter the same product in multiple categories (that’s a no-no!). The entry form page includes links to more information about the judging criteria for the product evaluations, as well as some entry submission tips from the judges.
Don’t wait—enter now. And good luck!
Terms of the agreement were not disclosed, but Axcient has already integrated DirectRestore’s technology for Microsoft Exchange servers. The company intends to integrate the capabilities into its cloud platform for SQL Server and SharePoint databases over the next two quarters, with other applications to follow.
“DirectRestore specializes in granular application recovery and they support more than 100 application formats today,” Axcient CEO Justin Moore said. “This is very synergistic with our existing platform. We now have the team that has deep experience in granular recovery so we can expand Axcient to other applications.”
Axcient claims more than 3,000 customers on its platform. It takes full image copies of customers’ servers, and customers can restore data from any device when systems go down.
With the DirectRestore capabilities, Moore said the Axcient platform will be able to do granular recovery down to the object level along with files and folders. That means customers won’t have to buy individual products for specific applications.
“There are only two or three companies that have the type of technology that DirectRestore has. Anyone that has granular technology, almost all OEM it,” said Moore. “[This acquisition] not only gives us control of the technology but by integrating it into our technology, we control the entire stack.”
Moore said granular recovery is available now for Microsoft Exchange servers, with about 500 customers using it.
DirectRestore’s 15-member engineering team will join Axcient, and Moore said he intends to double the group over the next 12 months.
“Our DNA has been in the small to medium-sized businesses but with the (new) technology we want to take on larger customers that can get access to all of our functionality,” said Moore. “The granular recovery technology will expand beyond the products we have today.”
The industry’s two hard drive vendors had a busy week with product rollouts and future-gazing.
Western Digital’s HGST division launched a flurry of products Tuesday, including 8 TB and 10 TB helium drives, nonvolatile memory express (NVMe) PCIe flash drives, and flash software. It also revealed its air-filled Ultrastar hard drives will be replaced by the helium drives after the current generation.
Rival Seagate launched a raft of products Wednesday, and talked about its business strategy today at an analyst day.
Seagate’s new products include the ClusterStor 9000 Lustre-based high-performance computing system that it gained from its Xyratex acquisition; an EVault Enterprise Backup and Recovery Appliance that handles up to 100 TB of usable capacity with data reduction; Nytro XP6302 (1.75 TB usable) and XP209 (1.86 TB usable) PCIe flash cards; and 15,000 rpm and 10,000 rpm hard drives, including a 2 TB 2.5-inch drive.
While HGST is pushing hard to expand into solid-state storage, Seagate seems more interested in refining its hard drive technology for the cloud. It did buy LSI’s flash controller technology to “control NAND better than anybody else,” as Seagate president of operations and technology Dave Mosley said, but its executives seemed most excited about the cloud during analyst day.
“I think people accept that the cloud architecture will be the architecture of the future,” Seagate CEO Steve Luczo said.
“Cloud storage is the thing that’s really exploding,” Mosley added.
Seagate set up a cloud systems and solutions division this week, headed by Cisco veteran Jamie Lerner. Its Kinetic open storage architecture revealed last year is also built largely for the cloud.
CFO Patrick O’Malley said cloud and flash products can bring Seagate $2 billion in revenue over the next two years, before adding that the hard disk drive business “is [still] a story of growth.”
Luczo played down the emergence of flash in enterprise storage, saying nearly all of the flash in use is connected to computers (clients and servers) rather than the storage bus.
Mosley said Seagate is on track to deliver a 20 TB hard drive by 2020. That’s twice the capacity of the largest drive HGST rolled out this week, and 2.5 times the 8 TB drive Seagate announced last month.