Storage Soup


October 10, 2014  2:27 PM

A look at access methods for open systems and mainframes

Randy Kerns
Storage

The term access method is frequently used to identify types of I/O in open systems. Many who use it probably don’t understand the historical context for what has been known as an access method for over 50 years.  In open systems, the types of I/O are for block data, file data, and object data. Access methods represent how the types of data are stored on devices.

The term access method comes from the mainframe world and denotes a number of well-known (at least to those who have worked with mainframes) means to store or access information. Access methods are really software routines that application programs invoke through commands that are inline calls to system functions; you can think of them as Application Program Interfaces (APIs). The closest equivalent function in open systems would be a device driver.

There are many types of access methods, and most deal with how data is organized, usually in the form of records, which are typically fixed-length blocks of data in a dataset.

Some of the familiar access methods for storage in the mainframe world include:

  • BSAM – Basic Sequential Access Method
  • QSAM – Queued Sequential Access Method
  • BDAM – Basic Direct Access Method
  • BPAM – Basic Partitioned Access Method
  • ISAM – Indexed Sequential Access Method
  • VSAM – Virtual Storage Access Method
  • OAM – Object Access Method

An example of doing I/O in an application in QSAM would be to set up buffers in memory for queued I/O (multiple records in a block) and then do a GET or PUT.  Interestingly, the basic I/O for S3 object access is GET and PUT.
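
For readers coming from the open systems side, the S3 parallel looks like the following minimal sketch using the boto3 library; the bucket name and key are placeholders rather than real resources.

```python
# Minimal sketch of object GET/PUT against Amazon S3 using boto3.
# The bucket name and key are placeholders, not real resources.
import boto3

s3 = boto3.client("s3")

# PUT: store a record (here, a small byte string) under a key.
s3.put_object(Bucket="example-bucket", Key="records/record-0001",
              Body=b"fixed-length record payload")

# GET: retrieve the same object and read its contents back.
response = s3.get_object(Bucket="example-bucket", Key="records/record-0001")
data = response["Body"].read()
print(len(data), "bytes retrieved")
```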

Open systems access methods are termed:

  • Block – individual blocks of data are read or written from/to storage
  • File – a stream of bytes representing a file, with associated file metadata, is written or read within the organization of a hierarchical tree structure.
  • Object – data segments and user- or system-defined metadata are stored in a flat namespace, with access through object ID resolution.

The open systems access methods don’t map directly to those in the mainframe world, but you can understand them if you know the mainframe methods. The term access method in open systems isn’t wrong; it just means a slightly different thing. Translating between the two helps clarify the meaning.
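
As a rough illustration of the first two open systems methods, here is a minimal Python sketch (the path and block size are just examples): file access reads a named stream of bytes through the hierarchical namespace, while block-style access addresses fixed-size blocks by offset, which is roughly what happens beneath the file system.

```python
# Rough illustration of file access vs. block-style access (POSIX systems).
# The path is an example; real block access normally targets a raw device
# and requires appropriate privileges.
import os

BLOCK_SIZE = 4096

# File access: open a path in the hierarchical namespace and read a
# stream of bytes; the file system handles placement and metadata.
with open("/tmp/example.dat", "wb") as f:
    f.write(b"x" * (BLOCK_SIZE * 4))

with open("/tmp/example.dat", "rb") as f:
    stream = f.read()

# Block-style access: address fixed-size blocks by number/offset.
fd = os.open("/tmp/example.dat", os.O_RDONLY)
block_2 = os.pread(fd, BLOCK_SIZE, 2 * BLOCK_SIZE)  # read block #2
os.close(fd)

print(len(stream), len(block_2))
```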

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

October 9, 2014  4:57 PM

Symantec follows HP down breakup path

Dave Raffo
Storage, Symantec, Veritas

The divorce rate for IT vendors spiked this week. Hewlett-Packard and Symantec said they are separating into pieces, and it might take a marriage counselor to keep EMC together.

Symantec today confirmed it is splitting off its information management business from the security business. The security company will keep the Symantec name, while the Information Management company has no name yet. The split is scheduled to complete by the end of 2015.

The Information Management arm will be a storage vendor, with products spanning backup and recovery, archiving, eDiscovery, storage management, and information availability. John Gannon, who retired as Quantum COO in 2005 and also led HP’s personal computing division, becomes general manager of the new storage company.

Michael Brown, named Symantec’s permanent CEO last month, will continue to run Symantec.

“We’re confident this is the right thing from a strategy standpoint,” Brown said.

Brown said Symantec’s leadership team decided it was too difficult to remain a market leader in both security and data management, and that led to the breakup decision. The security and storage companies came together in 2005 when Symantec acquired Veritas for $13.5 billion, but for years there have been intermittent rumors that the backup business would be spun off or sold.

The security part of the business has been the bigger piece of Symantec, with $4.2 billion of revenue in fiscal year 2014 compared to $2.5 billion for information management.

So, does anyone think they should call the new information management company Veritas?


October 9, 2014  3:26 PM

EMC claims products work better together despite a call for breakup

Dave Raffo
Pivotal, RSA, Storage, VMware

EMC has issued two responses to the letter that investor Elliott Management made public Wednesday calling for the vendor to spin off VMware and/or explore a merger with other large companies.

EMC first released a direct response to the Elliott letter, saying little except to repeat claims that EMC is exploring options but believes its strategy is sound.

An indirect response did a better job of making EMC’s case for keeping its federation of EMC, VMware, RSA, and Pivotal together. That response came today in the form of a release touting its Federation Software-Defined Data Center Solution.

The solution is little more than a combination of products from EMC’s companies with extras such as a self-service portal and scripts to tie them together. But the concept shows how the parts of the EMC Federation work together, testing the products at the federation’s engineering lab on the VMware campus, and putting pieces together to solve distinct data center problems.

Is it a coincidence that the data center solution release came one day after Elliott’s letter to CEO Joe Tucci and the EMC board questioning the value of EMC keeping everything under one umbrella? Bharat Badrinath, EMC’s Senior Director of Global Solutions Marketing, isn’t saying.

“That’s something Joe and the board will determine,” he said of the spinout and merger issue.

Badrinath’s job is pushing products, not mergers. EMC’s solution announcement also provided this list of EMC Federation products brought together as part of the software-defined data center solution:

  • Management and Orchestration: VMware vCloud Automation Center, VMware vCenter Operations Management, VMware IT Business Management, EMC Storage Resource Manager
  • Hypervisor: VMware vSphere, the industry’s most widely deployed virtualization platform
  • Networking: VMware NSX, the network virtualization and security platform for the software-defined data center.  VMware NSX brings virtualization to existing networks and transforms network operations and economics
  • Storage: Designed for EMC ViPR & EMC Storage, EMC Storage Resource Manager, VMware Virtual SAN.
  • Hybrid Cloud Deployment Models: Connectivity to VMware vCloud Air
  • Choice of Hardware: Built on converged infrastructure and can be deployed on a variety of hardware including VCE Vblock and VSPEX.
  • PaaS: Delivering Platform-as-a-Service with Pivotal CF
  • Documented Reference Architectures

The point EMC wants to make is these products from different parts of the federation are intertwined and cannot be broken apart without harm.

“We have four strategically aligned companies which are working together at times, but there are also times when they are independent and operate on their own,” Badrinath said. “Customers can pick products developed independently or together. It’s all about us being better together or bringing the best of the best within the four businesses.”

Other solutions that will follow include Platform-as-a-Service, End-User Computing, Virtualized Data Lake and Security Analytics. Badrinath said they all should be available by early 2015.

Badrinath said the testing for the software-defined data center portion of the program took 40,000 person-hours of engineering across federation companies. He also emphasized that EMC and VMware continue to work with outside partners, even if those partners such as Microsoft or other storage vendors compete with federation companies at times.

While the federation’s software-defined data center initiative has been going on for months, the release sounds as if it were put together to counter specific complaints from Elliott. The letter, signed by Elliott portfolio manager Jesse Cohn, said the EMC storage company and VMware “hinder one another” because they compete in some areas, and the relationship prevents them from developing other critical relationships. Cohn said EMC’s stock is underperforming, the company is undervalued, and EMC and VMware would both be better off apart.

“As time passes, this untenable situation is going to get worse,” he wrote to EMC.


October 3, 2014  9:56 PM

Red Hat faces long-term decisions on Gluster, Ceph in storage portfolio

Carol Sliwa
Storage

While launching the latest version of Red Hat Storage Server yesterday, the vendor provided little insight into the long-term positioning of its storage software portfolio and the chances that it might combine its Gluster-based Storage Server and Inktank Ceph Enterprise product lines.

Ranga Rangachari, vice president and general manager of storage and big data at Red Hat, said the company hopes to “get back to our customers and partners in the very near future with a consolidated vision of where this journey is going.” He addressed the topic in response to a question during the company’s Webcast entitled “Advancing software-defined storage,” a term he said customers view as the ability to take advantage of industry-standard x86 servers with the intelligence resting in the software.

Rangachari noted simply that Red Hat’s acquisition of Inktank Storage Inc. this year brought object- and block-based storage to the table and complemented the file system capabilities the company gained through its 2011 acquisition of Gluster Inc.

Gluster had sold a supported version of the open source GlusterFS distributed file system in much the same way that Inktank sold a supported version of open source Ceph. Any innovative software development work rests with their respective open source project communities.

“The Gluster and the Ceph communities continue to thrive independently and thrive really well,” said Rangachari, claiming that Gluster and Ceph combined for almost two million downloads during the last nine months. “The innovation that’s going on on both those projects will continue to happen unabated.”

Red Hat put out new versions of each of the commercially supported products this year. Storage Server 3, launched yesterday, is based on open source GlusterFS 3.6 and adds support for snapshots, multi-petabyte scale-out capacity, flash drives and Hadoop-based data analytics. Inktank Ceph Enterprise 1.2, released in July, was based on open source Ceph’s Firefly release and added erasure coding, cache tiering and updated tools to manage and monitor the distributed object storage cluster.

The Ceph open source project claims to be a unified system providing object, block and file system storage. Ceph’s file system runs on top of the same object storage system that provides object storage and block device interfaces, according to the project’s Web site.
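
To make that layering a bit more concrete, here is a minimal sketch of writing and reading an object directly through Ceph’s Python librados bindings; the configuration file path and pool name are assumptions that vary per cluster, and the block (RBD) and file (CephFS) interfaces sit on top of this same object layer.

```python
# Minimal sketch of object access through Ceph's librados Python bindings.
# The conffile path and pool name are assumptions for illustration.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("example-pool")   # pool name is a placeholder
    try:
        ioctx.write_full("demo-object", b"stored as an object in RADOS")
        data = ioctx.read("demo-object")
        print(data)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```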

“It’s fair to say that file is probably the least well evolved of those three,” said Simon Robinson, a research vice president in storage at New York-based 451 Research LLC. “The file capability is very immature. It’s not enterprise-grade.”

But, as the Ceph technology improves, Red Hat will need to confront the question of whether to continue to focus on Gluster and Ceph, said Robinson.

“I think Red Hat’s bet buying Gluster was, ‘Hey, look at all this unstructured data. Look how quickly it’s growing. We need a play here.’ Three years ago, that play was NAS. Today it looks slightly different,” said Robinson. “When we think about the growth of unstructured data, it’s actually object that is seen as the future architecture rather than NAS.”

He cited Amazon and Microsoft Azure as proof points of the object model working at scale. “It’s just a case of how does that percolate down into the enterprise. It will take time,” he said.

Robinson said he doesn’t think it makes sense for Red Hat to physically merge Gluster and Ceph. He predicted that if Red Hat Storage does catch on, its success will be through Ceph – “the darling of the storage startup world” – tied to the broader success of the open source OpenStack cloud technology platform. Ceph has already started to gain momentum among cloud service providers, he said.

“Everybody’s playing with OpenStack, and if you’re playing with OpenStack, you’ve probably heard of Ceph. And Ceph has the interest of the broader storage community,” said Robinson. “Other big players are really interested in making Ceph a success. That works for Red Hat’s advantage.”

Henry Baltazar, a senior analyst at Cambridge, Massachusetts-based Forrester Research Inc., said he sees no problem with Red Hat having Gluster-based file and Ceph-based block and object storage options at this point, since the company doesn’t have much market share.

“They’re going to have two platforms in the foreseeable future. Those aren’t going to merge,” predicted Baltazar. “Gluster is definitely the file storage type. There are ways they could use it that can complement Ceph. It still remains to be seen where it will wind up 10 years from now.”


October 3, 2014  4:21 PM

EMC links arms with CloudLink for DRaaS

Dave Raffo
Disaster Recovery, DRaaS, EMC, Storage

EMC is aiming its new RecoverPoint for Virtual Machines at cloud DR, in partnership with cloud security vendor CloudLink Technologies.

RecoverPoint for Virtual Machines is a hypervisor-based version of EMC’s RecoverPoint replication software. It will be generally available Nov. 17. EMC will also integrate the product with CloudLink SecureVSA, which provides encryption for data at rest and data in motion.

The combined products can allow service providers to build DR as a Service (DRaaS), and enterprises can use them to replicate data to private or public clouds for DR.

RecoverPoint for Virtual Machines is a software-only product. Unlike previous versions of RecoverPoint, it is storage-agnostic, so it doesn’t require EMC arrays to run. It works with any VMware-certified storage. It is not hypervisor-agnostic yet, though. It supports VMware vSphere today, with support for Microsoft Hyper-V and KVM hypervisors on the roadmap.

It is EMC’s first replication software that works on the individual VM level. Instead of replicating storage LUNs as other RecoverPoint versions do, RecoverPoint for Virtual Machines splits and replicates writes for VMware vSphere VMs. It requires splitter code on each ESX (versions 5.1 and above) node running protected VMs, and at least one virtual appliance at each site. Customers can replicate VMs regardless of hardware running at either end.
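
Conceptually, a write splitter duplicates each guest write so that one copy completes against local storage while another is queued for shipment to the remote copy of the VM; the toy sketch below only illustrates that idea and is not RecoverPoint code.

```python
# Toy illustration of the "write splitter" idea: each write is applied
# locally and a copy is queued for transfer to the recovery site.
# Conceptual sketch only; not EMC's implementation.
from collections import deque

class SplitterToy:
    def __init__(self):
        self.local_disk = {}              # offset -> data, stands in for the local volume
        self.replication_queue = deque()  # writes awaiting transfer to the remote site

    def write(self, offset, data):
        self.local_disk[offset] = data                  # complete the local write
        self.replication_queue.append((offset, data))   # split: keep a copy for the replica

    def drain_to_remote(self, remote_disk):
        # A real product batches, orders and acknowledges these transfers;
        # here we simply apply the queued writes to the remote image.
        while self.replication_queue:
            offset, data = self.replication_queue.popleft()
            remote_disk[offset] = data

splitter = SplitterToy()
splitter.write(0, b"vm boot block")
splitter.write(4096, b"application data")
replica = {}
splitter.drain_to_remote(replica)
print(replica == splitter.local_disk)  # True: replica matches the protected VM image
```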

CloudLink SecureVSA adds security. It allows customers to store and manage encryption keys on-premises.

“One of the big inhibitors of going to a public cloud is security,” said Jean Banko, director of product marketing for EMC’s data protection division. “That’s why we partnered with CloudLink.”


October 3, 2014  2:45 PM

Nimble, Actifio show their Oracle chops

Dave Raffo
Actifio, Oracle OpenWorld, Storage

Oracle wasn’t the only vendor to toot its own storage horn at Oracle OpenWorld this week. A couple of smaller vendors played up their budding relationship with the database giant to show how they are growing their number of enterprise customers.

Array vendor Nimble Storage added a pre-validated SmartStack reference architecture for Oracle’s JD Edwards EnterpriseOne application, and copy data management pioneer Actifio increased its integration with Oracle apps to attract more database administrator customers.

Nimble first introduced its SmartStack reference architectures in late 2012 through a partnership with Cisco, and claims more than 200 SmartStack customers. The JD Edwards version consists of Nimble’s CS300 storage array, Cisco UCS Mini Server, Oracle VM, Oracle Linux, Oracle Database, and JD Edwards EnterpriseOne 9.1.

Nimble’s previous SmartStack flavors include VDI for Citrix and VMware, Microsoft Critical Applications, Oracle Critical Applications, Desktop/Server Virtualization, Server Virtualization/Private Cloud and Data Protection with CommVault.

Radhika Krishnan, Nimble VP of product marketing and alliances, said the JD Edwards SmartStack came about because more enterprises are starting to deploy Nimble storage, and JD Edwards can be a tricky app to size correctly.

“Sizing tends to be challenging, depending on the number of end users you have,” she said.

Actifio used the conference to show off its expanded support for Oracle applications, allowing it to become an Oracle Platinum Level partner. The expanded support for Actifio CDS includes Oracle Database 12c, RMAN and Exadata products.

The value of Actifio’s software is that it allows organizations to use one copy of data for production, backup, test/development or any other application.

Actifio reps said they modified their platform’s workflows to enable DBAs to automate the data flow, with the help of RESTful APIs.

Oracle DBAs can use the automated workflow to provide copies of their databases to developers in minutes, according to Actifio senior director of global marketing Andre Gilman.

“I call it RMAN on steroids,” he said. “The old school way can take days to weeks, and even with newer technologies it takes hours.

“You can create a live clone of a database and update a virtual image to all your developers at the same time. You don’t have to repeat the process as part of daily maintenance. You put it in as one workflow and it’s all automated.”

Actifio director of product marketing Chris Carrier said the integration came from months of co-development work with Oracle. He said Actifio uses RMAN to do its change-block tracking and built its LogSmart log management system specifically around Oracle, although it works with other databases. “If you show Oracle DBAs a different way to manage data, they get nervous. But if you’re leveraging RMAN, they like that,” Carrier said.


September 30, 2014  4:04 PM

EMC’s XtremIO 3.0 adds compression with disruptive upgrade for existing users

Carol Sliwa
Storage

EMC Corp. made available the 3.0 release of its XtremIO all-flash array today with new inline compression capabilities and performance improvements – but existing customers who want the software upgrade need to prepare for significant disruption to their production environments.

Josh Goldstein, vice president of marketing and product management for EMC’s XtremIO business unit, confirmed that users will need to move all of their data off of their XtremIO arrays to do the upgrade and then move it back onto the system once the work is complete.

EMC’s XtremIO division has taken some heat on the disruptive – and some say “destructive” – nature of the upgrade, especially in view of the company’s prior claims that the product supported non-disruptive upgrades (NDU) of software.

Goldstein said the company decided to make an exception for the 3.0 upgrade based on customer input about the inline compression capabilities, which he claimed could double the usable capacity of an XtremIO array in many cases.

“This was a choice that was made, and it was not an easy choice,” said Goldstein. “We could have delayed the feature. Originally we were planning to put this in later in the roadmap. If we had chosen to, we could have tied this to another hardware release, and it would have been something that existing customers could never take advantage of. Our customer base told us emphatically that that was not what they wanted.”

Goldstein said that EMC will provide, at no cost to customers, the option of professional services and extra XtremIO “swing” capacity, if necessary, to ensure that they have an equivalent system on which to put their data while the upgrade is taking place.

The disruptive nature of the 3.0 upgrade came to light recently through an “XtremIO Gotcha” blog post from Andrew Dauncey, a leader of the Melbourne, Australia, VMware user group (VMUG). Dauncey wrote: “As a customer with limited funds, this is the only array for a VDI project, where the business runs 24/7, so to have to wipe the array has massive impacts.” He said a systems integrator had offered a loan device to help with the upgrade.

Dauncey worked as a systems engineer at a public hospital in Australia at the time of his initial post on Sept. 14. He has since gone to work for IBM as a virtualization specialist.

In a blog post last Sunday, Dauncey noted EMC’s “marketing collateral” that advertised “non-disruptive software and firmware upgrades to ensure 7×24 continuous operations,” and he accused EMC of “false advertising” prior to the release of the updated 3.0 firmware.

Goldstein said, “The releases that we’ve had from the time the product went GA up until now were all NDU. The releases that we have going forward after this point will all be NDU as well.”

Chris Evans, an IT consultant who writes the blog “Architecting IT,” said via an e-mail that HP upgraded its platform to cater to flash, and SolidFire released a new on-disk structure in the operating system for its all-flash array, without disruptive upgrades. What’s surprising in the XtremIO case, he said, is “that EMC didn’t foresee the volume of memory to store metadata only 12 months after their first release of code.”

Chad Sakac, a senior vice president of global systems engineering at EMC, shed some light on the technical underpinnings of the upgrade through his “Virtual Geek” personal blog, which he said has no affiliation to the company. He said the 2.4 to 3.0 upgrade touches both the layout structure and metadata indirection layer, and as a result, is disruptive to the host. He pointed to what he said were similar examples from “the vendor ecosystem.”

Goldstein confirmed that the block size is changing from 4 KB to 8 KB, but he said the block-size change is not the main reason for the disruptive upgrade. He said it’s “all these things taken together” that the company is doing to both add compression and improve performance.

“We already had inline deduplication in the array, and that means that you have to have metadata structures that can describe how to take unique blocks and reconstitute them into the information that the customers originally stored,” Goldstein said. “When you add inline compression, you have to have similar metadata information about how to reconstitute compressed blocks into what the customer originally stored. Those kinds of changes are things that change the data structures in the array, and that’s what we had to update.”
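
As a rough way to picture what Goldstein is describing, the toy sketch below models dedup metadata as a map from block fingerprints to stored data; once the stored block is also compressed, each entry has to carry the extra information needed to reconstitute the original block, which is the kind of structural change that ripples through an array’s metadata. This is an illustration only, not XtremIO’s actual layout.

```python
# Toy model of why adding inline compression changes dedup metadata.
# Illustration only; not XtremIO's on-array format.
import hashlib
import zlib

store = {}     # fingerprint -> stored bytes (possibly compressed)
metadata = {}  # fingerprint -> info needed to reconstitute the block

def write_block(block: bytes) -> str:
    fp = hashlib.sha256(block).hexdigest()
    if fp not in store:                      # inline dedup: store unique blocks once
        store[fp] = zlib.compress(block)     # inline compression on the unique block
        # With compression, the metadata must describe how to rebuild the
        # original block, not just where it lives.
        metadata[fp] = {"compressed": True, "logical_len": len(block)}
    return fp

def read_block(fp: str) -> bytes:
    info = metadata[fp]
    data = store[fp]
    return zlib.decompress(data) if info["compressed"] else data

fp = write_block(b"A" * 8192)                # one 8 KB logical block
assert read_block(fp) == b"A" * 8192
```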

Goldstein said customers should not have to endure an outage in the “vast majority of cases.” Goldstein claimed that XtremIO is “overwhelmingly used” in virtual environments and moving virtualized workloads is not difficult. He mentioned VMware’s Storage VMotion and EMC’s PowerPath Migration Enabler as two of the main options to help, but he said there are others.

Customers also may choose to remain on the 2.4 code that EMC released in May. Goldstein said that EMC will continue to provide bug fixes on the prior 2.4 release for “quite a long time.”

“There’s nothing forcing them to upgrade,” he said.

Craig Englund, principal architect at Boston Scientific Corp., said EMC contacted Boston Scientific’s management about the disruptive upgrade in the spring. At the time, the IT team already had a loaner array for test purposes, and they asked to keep it longer after learning the 3.0 upgrade was destructive.

“It reformats the array. It’s destructive. It’s not just disruptive,” Englund said. “You have to move all of your data off the array for them to perform the upgrade.”

But, Englund said the team can “move things around storage-wise non-disruptively” because the environment is highly virtualized through VMware. He said he’s willing to go through the inconvenience to gain the ability to run more virtual desktops and SQL Server databases on the company’s existing XtremIO hardware. Early tests have shown a 1.9 to 1 capacity improvement for the database workloads and 1.3 to 1 for VDI, he said.

“They could have said, ‘If you want these new features, it’s coming out in the next hardware platform, and you’ll have to buy another frame to get it.’ But, they didn’t, and I think that’s great,” Englund said. “To try to get this out to all of the existing customers before they get too many workloads on them, I think, was considerate.”


September 30, 2014  2:28 PM

All those video cameras can be a boon for storage companies

Dave Raffo
Storage

Whenever Big Brother is watching, there is a storage vendor eager to store whatever Big Brother is seeing. And today, Big Brother is watching more places than ever before.

The video surveillance market consisting of cameras, software, DVRs, storage and other hardware is expected to reach $26 billion by 2018 and is growing twice as fast as the overall IT market, according to market research firm IHS. The main accelerators of that growth projection are the common use of surveillance in markets such as government, city surveillance and transportation, and the transformation of video from analog to more capacity-hungry digital.

So it’s no surprise that storage stalwarts EMC and Seagate are pushing hard into video surveillance.

EMC today launched a video surveillance practice, which includes a VNX array and partnerships tailored to the market. The VNX-VSS100 is configured for video surveillance cameras on the edge, with 4 TB nearline SAS drives and a mix of memory and connectivity to handle video files. The VNX-VSS100 comes in 24 TB and 120 TB configurations, and has been validated with video surveillance software and cameras, according to Michael Gallant, senior director of EMC’s video surveillance practice.

EMC has also tested its Isilon scale-out NAS arrays for core storage of video surveillance data in the data center. EMC has tested its storage with surveillance technology vendors such as Axis, Genetec, Milestone and Verint. Its video surveillance distributor and integrator partners include Avnet, Ingram Micro, and ScanSource.

Gallant said EMC has been in the video surveillance market for eight years but the VNX-VSS100 is its first main storage platform built specifically for the market.

He said storage is the fastest growing part of the video surveillance market, and is expected to be around $3 billion in 2016.

“This is one of the most storage intensive application workloads,” Gallant said. “Governments are requiring longer retention of video, and the data collected is considered more valuable now. Organizations are putting a lot of edge storage devices to cover subway systems, railway stations and bus stations. There is a need for highly available high performance storage at the edge, and that data is being brought back to the core.”

Seagate today launched the Seagate Surveillance HDD, a hard drive available in 1 TB to 4 TB capacities, with 5 TB and 6 TB versions expected by the end of the year. The drive includes Seagate Rescue services, which the vendor said can typically restore data within two weeks with a more than 90 percent data recovery success rate. The drive is designed for large streaming workloads used in video surveillance, and has a one-million-hour mean time between failures (MTBF) rating to help it stay in the field longer.


September 29, 2014  11:44 AM

Oracle launches all-flash SAN, backup box for databases

Dave Raffo
Flash Array, Oracle, Storage

Larry Ellison took aim at EMC when he introduced a flash storage array and a disk backup product during his Sunday keynote at Oracle OpenWorld.

Never mind that the world is filled with flash SANs and disk backup appliances. Ellison proclaimed the FS1 Flash Storage System and Zero Data Loss Recovery Appliance the greatest in their class for Oracle applications. He singled out EMC’s XtremIO as the flash system he is looking to compete with. While he didn’t mention EMC’s Data Domain among disk targets, that is the clear market leader.

Ellison recently stepped down as Oracle CEO to become its executive chairman and CTO, but opened the annual show Sunday night with a rundown of new Oracle products and services.

“This is our first big SAN product,” Ellison said of the FS1, adding it can scale to 16 nodes and be used as an all-flash or hybrid array mixing SSDs and hard disk drives.

As with all Oracle storage systems, the FS1 is designed specifically for Oracle applications. It scales to 912 TB of flash or 2.9 PB of combined SSD and HDD capacity with 30 drive enclosures.

The system also uses Oracle QoS Plus quality of service software to place data across four storage tiers. Customers can set application profiles for Oracle Database and other Oracle enterprise apps to set automated tiering.

FS1 systems come with a base controller or performance controller, and each system supports 30 drive enclosures. A base controller includes 64 GB of RAM cache or 16 GB NV-DIMM cache, and a performance controller has either 384 GB RAM or 32 GB NV-DIMM cache.

Drive enclosures support 400 GB performance SSDs, 1.6 TB capacity SSDs, 300 GB and 900 GB performance disk drives, and 4 TB capacity disk drives, and a system supports any combination of these drives. A performance SSD enclosure includes either seven or 12 of the 400 GB drives, a capacity SSD enclosure holds seven, 13 or 19 of the 1.6 TB drives, a performance HDD enclosure comes with 24 300 GB or 24 900 GB 10,000 rpm drives, and an HDD capacity enclosure has 24 of the 4 TB 7,200 rpm drives.

There are dozens of all-flash and hybrid flash systems on the market, but Ellison singled out XtremIO for comparison. “It’s much faster than XtremIO, and half the price,” Ellison said. He had a chart with IOPS and throughput numbers for the FS1 and XtremIO without giving the configurations that produced those numbers.

Ellison also unveiled the Oracle Zero Data Loss Recovery appliance, proclaiming “I named this myself, it’s a very catchy name.”

Ellison said the appliance is tightly integrated with the Oracle Database and the Recovery Manager (RMAN) backup tool to exceed performance of other backup appliances. Backup data blocks are validated as the recovery appliance receives them, and again as they are copied to tape or replicated. The appliance also periodically validates blocks on disk.

He said it is superior to other disk backup targets for Oracle databases. “Backup appliances don’t work well for databases because they think of databases as a bunch of files, and databases are not a bunch of files,” he said.

The recovery appliance also includes Delta Store software that validates the incoming changed data blocks, and then compresses, indexes and stores them. The appliance holds Virtual Full Database Backups, which are space-efficient, pointer-based representations of full backups at point-in-time increments.

The recovery appliance uses source-side deduplication that Oracle calls Delta Push to identify changed blocks on production databases through RMAN block change tracking. That eliminates the need to read unchanged data.
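
As a rough sketch of how changed-block tracking and virtual full backups can work together in principle (an illustration only, not Oracle’s Delta Push/Delta Store implementation), each backup pushes only the blocks that changed, and a “virtual full” is just a pointer map that selects the most recent copy of every block at that point in time:

```python
# Toy sketch of changed-block tracking and "virtual full" backups.
# Illustration only; not Oracle's Delta Push/Delta Store implementation.

block_store = {}       # (block_id, version) -> block data
virtual_fulls = []     # one pointer map per backup: block_id -> (block_id, version)

def backup(database: dict, changed_blocks: set, version: int):
    # Push only the blocks that changed since the previous backup.
    for block_id in changed_blocks:
        block_store[(block_id, version)] = database[block_id]
    # Build a space-efficient virtual full: pointers, not copies.
    pointers = dict(virtual_fulls[-1]) if virtual_fulls else {}
    for block_id in changed_blocks:
        pointers[block_id] = (block_id, version)
    virtual_fulls.append(pointers)

def restore(version_index: int) -> dict:
    # Reconstitute a full database image from a single pointer map.
    return {bid: block_store[ref] for bid, ref in virtual_fulls[version_index].items()}

db = {0: b"base", 1: b"rows-v1"}
backup(db, {0, 1}, version=0)        # first backup pushes everything
db[1] = b"rows-v2"
backup(db, {1}, version=1)           # later backups push only changed blocks
print(restore(1) == db)              # True: the virtual full rebuilds the current image
```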

A base configuration includes two compute servers and three storage servers connected internally through high-speed InfiniBand, and scales to 14 storage servers. A base configuration holds up to 37 TB of usable capacity (before dedupe) and a full rack has 224 TB of usable capacity. Oracle claims a rack can provide up to 2.2 PB of virtual full backups. That is enough for a 10-day recovery window for 224 TB of virtual full backups and logs. Up to 18 fully configured racks can be connected via InfiniBand.

Oracle claims a single rack appliance can ingest changed data at 12 TB per hour, and that performance increases incrementally as racks are added.


September 29, 2014  6:30 AM

Nasuni tests storage volume with more than 1 billion files

Carol Sliwa
Storage

Generating a billion files to prove a point is no trivial task.

Nasuni Corp. claimed to have spent 15 months creating and testing a single storage volume within its service with more than a billion files, including 27,000 snapshot versions of the file system. Nasuni Filers captured the data and did the necessary protocol transformation to send it to Amazon’s Simple Storage Service (S3) and store the files as objects.

The Natick, Massachusetts-based company used a test tool to generate the files directly on the Nasuni Filers, starting with a single entry-level system and eventually ramping up to a total of four filers to ingest the data faster. The two hardware and two virtual filers were located in Natick and nearby Marlborough, Massachusetts.

Nasuni CEO Andres Rodriguez said the files were representative of customer data, based on information from the company’s tracking statistics. He said there has been pressure on his company and competitors to demonstrate the scale of file systems, as customers increasingly want to deploy a single file system across multiple locations.

“We’re going after organizations that are running, say, Windows file servers in 50 locations and each of those Windows file servers may have 20 or 30 million files,” Rodriguez said. “They’re having problems with their backups or with their Windows file servers running out of room or running out of resources.”

Rodriguez said the UniFS global file system within Nasuni Filers at each site gives them access to their millions or billions of objects stored in Amazon S3 or Microsoft Azure. He said it doesn’t matter if the Nasuni Filer is a “tiny little box” or a “tiny little machine version.” “No matter how little the Nasuni Filer is, it can still see, access, read, write the one billion files,” said Rodriguez.
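
A rough sketch of the gateway-style transformation Rodriguez describes might look like the following, where local files are pushed to an object store with keys that mirror their paths; the bucket name and directory are placeholders, and this is not Nasuni’s UniFS implementation.

```python
# Conceptual sketch of a file-to-object gateway: local files are stored as
# objects whose keys mirror their paths. Bucket and directory names are
# placeholders; this is not Nasuni's UniFS implementation.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-cloud-bucket"
LOCAL_ROOT = "/data/share"

def push_tree_to_object_store(root: str):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = os.path.relpath(path, root)          # file path becomes the object key
            with open(path, "rb") as f:
                s3.put_object(Bucket=BUCKET, Key=key, Body=f.read())

push_tree_to_object_store(LOCAL_ROOT)
```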

How big a deal is the billion-file proof point?

Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Oregon, viewed the Nasuni test as simply a “nice marketing assertion.”

“I commend them for running the test,” he said. “But, vendors such as EMC Isilon, Joyent, Panzura and other highly scalable scale-out file systems with global namespace can also provide access to all files from any node. A Nasuni filer is slower and primarily a gateway to objects stored in Amazon S3 or Microsoft Azure.”

Nasuni provided no performance information related to the billion-file demonstration. The company said only that data input/output performance varies based on the model of Nasuni Filer used. Higher end models support higher performance than entry level units, a company spokesman said.

Steve Duplessie, founder of Enterprise Strategy Group Inc. in Milford, Mass., said via an e-mail that Nasuni takes aim at secondary or tier-2 files, and performance is a “non-issue” with that class of data. He said Panzura is probably closest in approach to Nasuni but plays at a different level and has a heavy hardware footprint. He said Isilon can scale to a billion files but not globally. Isilon and Panzura cater to primary tier-1 data and carry the price tag to match, he said.

“If you were performance sensitive, you should use Isilon or NetApp,” said Duplessie. “Having said that, the overwhelming percentage of data in the organization is not performance sensitive, and the cloud is a fine place to keep it.”

Gene Ruth, a research director in enterprise storage at Gartner Inc., said he fields calls on a frequent basis from legal firms, construction companies, government agencies and other clients trying to provide common access to file environments from dozens, hundreds and in some cases thousands of branch offices.

“Nasuni is addressing the bulk of the market, which is support for universal access to files – being able to get at files on any device from anywhere. You have a common authoritative source that’s synchronized in the backend that provides those files,” said Ruth. “And they’re not the only ones that can do this.”

Ruth doesn’t view Nasuni’s billion-file announcement as significant, but he does see it as an indicator of the continuing evolution of what he calls cloud-integrated storage and what others often refer to as cloud gateways.

“Nasuni’s proven a point,” said Ruth, “that incrementally they’re getting bigger and more capable and more credible in addressing a bigger audience.”

