At least one Symantec backup product will no longer be in the lineup by the time the vendor splits its security and backup businesses a little more than a year from now.
While many in the storage world were discussing the new information management company that would come from the Symantec split, Symantec last week disclosed plans to stop selling Backup Exec on an integrated appliance.
As of Jan. 5, Symantec will discontinue the Backup Exec 3600. It will sell Backup Exec the old-fashioned way – it will provide the software and let other vendors provide the hardware.
While integrated appliances for Symantec’s enterprise NetBackup software have been successful – it recently expanded the NetBackup appliance line – that has not been the case with the SMB-focused Backup Exec.
In a blog on the Symantec website announcing the move, senior director of global product marketing Drew Meyer wrote:
“Providing our partners with Backup Exec software that they can bundle with hardware and services best meets the needs of our small and mid-sized business customers looking for a combined offering.”
Meyer cited Fujitsu, which sells an Eternus BE50 appliance with Backup Exec in Japan and Europe. He also wrote the recent release of Backup Exec 2014 shows that Symantec is committed to the software, which ran into problems when the 2012 version came out.
Symantec’s new information management company will offer maintenance renewals for the Backup Exec 3600 through January of 2018 and support will continue until January of 2020.
Competitors are more than happy to relieve Backup Exec customers of their appliances. Zetta.net and Unitrends this week came forward with programs to tempt Backup Exec customers to switch.
Zetta said Backup Exec customers can sign up for Zetta’s cloud backup and DR service free for six months, and it will give up to 20 percent discounts on annual contracts. This is similar to a migration program Zetta ran for BackupExec.cloud customers after Symantec shut down that service earlier this year.
Unitrends said Backup Exec 3600 customers can trade their appliances for one of its integrated appliances for only the cost of support. The Unitrends Recovery-713, Recovery-813 and Recovery-822 are the available models. Backup Exec customers must sign three-year or five-year support contracts for their free appliances.
Object storage vendor Scality has scored a reseller deal with Hewlett-Packard, which the private company’s CEO said will greatly expand its global reach.
Scality and HP have worked together closely in the field, and a lot of Scality’s Ring software runs on HP ProLiant servers.
“We’ve been working with all the server vendors since the beginning,” Scality CEO Jerome Lecat said. “HP has been the most proactive in coming up with a server that fits our industry.”
HP sells Scality software on the ProLiant SL4540 and DL360p Gen 8 servers.
Lecat said Scality has more than 40 PB of customer data deployed on HP servers. Scality-HP customers include DailyMotion, Time Warner Cable and European television station RTL2, he said.
Lecat said the deal is crucial for Scality because “we’re still a relatively small company, and we do not have thousands of sales people around the globe like HP does.”
The deal is not exclusive. HP sells its own StoreAll product with object storage, and it also works closely with Cleversafe. There is no formal reseller deal with Cleversafe, but it is featured alongside Scality on HP’s object storage software for ProLiant web page.
Lecat said Cleversafe’s dsNet object storage is more suited for long-term archives, while Scality Ring is for active applications such as email and video archiving.
“We don’t see ourselves as an object storage company,” Lecat said. “Object storage companies only focus on archiving. Our ambitions are larger than that. We have a lot of media companies running video on demand, consumer web mail and other applications. We’re not just deep and cheap archiving.”
Druva is taking its enterprise endpoint backup software and moving it into backup for small businesses and remote and branch office backup.
The company this week launched Druva Phoenix, a centralized management backup and archive product targeting companies that have tight budgets and limited or no local IT staff. The software is based on Druva’s inSync enterprise endpoint backup and nCube architecture. Phoenix is agent-based software with global deduplication performed at the source.
Druva Phoenix is offered as an alternative to traditional server backup that requires secondary storage, tape and archiving.
“This is a pure play software as a service cloud product,” said Jaspreet Singh, Druva’s CEO and founder. “The core to solving backup to the cloud is building a scalable deduplication in the cloud. In the last five and a half years, we built endpoint backup for the cloud. In the last 18 months, we were looking for what we can solve next. The remote office looked interesting.
“We thought we could remove a few processes by introducing Phoenix,” he said. “We are extending from endpoint to remote offices. It’s a very natural extension for us.”
Phoenix has a software-based cache accelerator for backup and restores, which resides on the server in the remote or branch office. The rest of the data is moved into the Amazon cloud.
“Because there is not much metadata, it can scale fairly well,” Singh said.
Singh said without deduplication, the amount of data stored in the cloud becomes exorbitant. For instance, 1 TB of data can multiply to 719 TB after it is retained for seven years if daily incremental and full backups are kept.
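Singh’s 719 TB figure depends on the exact backup schedule, but the multiplication effect is easy to sketch. The model below is purely illustrative – the schedule, change rate and function name are assumptions, not Druva’s actual numbers:

```python
def retained_capacity_tb(source_tb, years, incr_change_rate=0.05):
    """Estimate raw capacity retained over time WITHOUT deduplication.

    Hypothetical schedule for illustration: one full backup per week plus
    six daily incrementals, each incremental capturing `incr_change_rate`
    of the source, and every copy kept for the whole retention period.
    """
    weeks = years * 52
    fulls = weeks * source_tb                       # one full per week
    incrementals = weeks * 6 * incr_change_rate * source_tb
    return fulls + incrementals

# 1 TB retained for seven years under this schedule already reaches
# hundreds of terabytes before any indexing or catalog overhead:
print(round(retained_capacity_tb(1, 7), 1))
```

Source-side deduplication collapses the repeated full-backup data, which is why Singh calls scalable dedup "the core to solving backup to the cloud."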
“One data reduction price-point is based on the source data,” Singh said.
Jason Buffington, senior analyst at Enterprise Strategy Group, said ROBO servers are the next “battleground” for cloud-based backup where it makes sense. For the remote office, he said the decision to back up to the cloud depends on whether IT wants to control ROBO backups or just manage the data repositories.
Druva’s endpoint software lends itself to small business and ROBO backup and archiving because the software was designed with administrative oversight capabilities, Buffington said. The software also comes with three-year, seven-year and infinite retention policies.
“No one would keep endpoint data for an infinite amount of time,” Buffington said. “But it should be a requirement for server-based protection.”
The term access method is frequently used to identify types of I/O in open systems. Many who use it probably don’t understand the historical context for what has been known as an access method for over 50 years. In open systems, the types of I/O are for block data, file data, and object data. Access methods represent how the types of data are stored on devices.
The term access method comes from the mainframe world and denotes a number of well known (at least to those who have worked with mainframes) means to store or access information. Access methods are really software routines accessed by application programs using software commands that are inline calls to system functions. You can call these Application Program Interfaces (APIs). The closest equivalent function in open systems would be a device driver.
There are many types of access methods and most deal with how data is organized, usually in the form of records, which are typically fixed length blocks of data in a dataset.
Some of the familiar access methods for storage in the mainframe world include:
- BSAM – Basic Sequential Access Method
- QSAM – Queued Sequential Access Method
- BDAM – Basic Direct Access Method
- BPAM – Basic Partitioned Access Method
- ISAM – Indexed Sequential Access Method
- VSAM – Virtual Storage Access Method
- OAM – Object Access Method
An example of doing I/O with QSAM would be to set up buffers in memory for queued I/O (multiple records in a block) and then issue a GET or PUT. Interestingly, the basic I/O operations for S3 object access are also GET and PUT.
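The parallel can be made concrete with a toy sketch – hypothetical code, not a real QSAM or S3 client. In both models, the application ultimately puts and gets named chunks of data through a simple interface:

```python
class TinyObjectStore:
    """Minimal in-memory sketch of GET/PUT semantics, echoing both
    QSAM's record-level GET/PUT and S3's object-level GET/PUT."""

    def __init__(self):
        self._objects = {}          # flat namespace: key -> bytes

    def put(self, key, data):
        """Store data under a key (cf. QSAM PUT, S3 PUT)."""
        self._objects[key] = bytes(data)

    def get(self, key):
        """Retrieve data by key (cf. QSAM GET, S3 GET)."""
        return self._objects[key]


store = TinyObjectStore()
store.put("dataset/record-001", b"payload")
print(store.get("dataset/record-001"))  # b'payload'
```

The details underneath differ enormously, of course – QSAM manages blocked records in datasets while S3 manages whole objects over HTTP – but the application-facing verbs are the same.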
Open systems access methods are termed:
- Block – individual blocks of data are read or written from/to storage
- File – a stream of bytes that represent a file with associated file metadata is written or read within the organization of a hierarchical tree structure.
- Object – data segments and user- or system-defined metadata are stored in a flat namespace, with access through object ID resolution.
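The three patterns can be illustrated in a few lines of hypothetical Python, with an ordinary file standing in for a raw block device:

```python
import os
import tempfile

# File access: a stream of bytes addressed by a path in a hierarchical tree.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "wb") as f:
    f.write(b"hello file")

# Block access: fixed-size blocks read at block_number * block_size offsets
# on a device (a plain file stands in for the device here).
BLOCK_SIZE = 4096
with open(path, "rb") as dev:
    dev.seek(0 * BLOCK_SIZE)        # block 0
    block = dev.read(BLOCK_SIZE)

# Object access: data plus metadata stored under a key in a flat namespace.
object_store = {}
object_store["6f1ed002ab"] = {"data": b"hello object",
                              "metadata": {"owner": "demo"}}

print(block[:10], object_store["6f1ed002ab"]["data"])
```

The block sketch highlights the key difference: block storage knows nothing about the data's meaning, only offsets, while file and object storage carry naming and metadata with the data.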
The open systems access methods don’t map directly to those in the mainframe world, but you can understand them if you know the mainframe methods. The term access method in open systems isn’t wrong; it just means a slightly different thing. Translating between the two helps clarify the meaning.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Symantec today confirmed it is splitting off its information management business from the security business. The security company will keep the Symantec name, while the Information Management company has no name yet. The split is scheduled to complete by the end of 2015.
The Information Management arm will be a storage vendor, with products in backup and recovery, archiving, eDiscovery, storage management, and information availability. John Gannon, who retired as Quantum COO in 2005 and also led HP’s personal computing division, becomes general manager of the new storage company.
Michael Brown, named Symantec’s permanent CEO last month, will continue to run Symantec.
“We’re confident this is the right thing from a strategy standpoint,” Brown said.
Brown said Symantec’s leadership team decided it was too difficult to remain a market leader in security and data management, and that led to the breakup decision. The security and storage companies came together in 2005 when Symantec acquired Veritas for $13.5 billion, but there have been intermittent rumors that the backup business would be spun off or sold for years.
The security part of the business has been the bigger piece of Symantec, with $4.2 billion of revenue in fiscal year 2014 compared to $2.5 billion for information management.
So, does anyone think they should call the new information management company Veritas?
EMC has issued two responses to the letter that investor Elliott Management made public Wednesday calling for the vendor to spin off VMware and/or explore a merger with other large companies.
EMC first released a direct response to the Elliott letter, saying little except to repeat claims that EMC is exploring options but believes its strategy is sound.
An indirect response did a better job of making EMC’s case for keeping its federation of EMC, VMware, RSA, and Pivotal together. That response came today in the form of a release touting its Federation Software-Defined Data Center Solution.
The solution is little more than a combination of products from EMC’s companies with extras such as a self-service portal and scripts to tie them together. But the concept shows how the parts of the EMC Federation work together, testing the products at the federation’s engineering lab on the VMware campus, and putting pieces together to solve distinct data center problems.
Is it a coincidence that the data center solution release came one day after Elliott’s letter to CEO Joe Tucci and the EMC board questioning the value of EMC keeping everything under one umbrella? Bharat Badrinath, EMC’s senior director of global solutions marketing, isn’t saying.
“That’s something Joe and the board will determine,” he said of the spinout and merger issue.
Badrinath’s job is pushing products, not mergers. EMC’s solution announcement also provided this list of EMC Federation products brought together as part of the software-defined data center solution:
- Management and Orchestration: VMware vCloud Automation Center, VMware vCenter Operations Management, VMware IT Business Management, EMC Storage Resource Manager
- Hypervisor: VMware vSphere, the industry’s most widely deployed virtualization platform
- Networking: VMware NSX, the network virtualization and security platform for the software-defined data center. VMware NSX brings virtualization to existing networks and transforms network operations and economics
- Storage: Designed for EMC ViPR & EMC Storage, EMC Storage Resource Manager, VMware Virtual SAN.
- Hybrid Cloud Deployment Models: Connectivity to VMware vCloud Air
- Choice of Hardware: Built on converged infrastructure and can be deployed on a variety of hardware including VCE Vblock and VSPEX.
- PaaS: Delivering Platform-as-a-Service with Pivotal CF
- Documented Reference Architectures
The point EMC wants to make is these products from different parts of the federation are intertwined and cannot be broken apart without harm.
“We have four strategically aligned companies which are working together at times, but there are also times when they are independent and operate on their own,” Badrinath said. “Customers can pick products developed independently or together. It’s all about us being better together or bringing the best of the best within the four businesses.”
Other solutions that will follow include Platform-as-a-Service, End-User Computing, Virtualized Data Lake and Security Analytics. Badrinath said they all should be available by early 2015.
Badrinath said the testing for the software-defined data center portion of the program took 40,000-person hours of engineering across federation companies. He also emphasized that EMC and VMware continue to work with outside partners, even if those partners such as Microsoft or other storage vendors compete with federation companies at times.
While the federation’s software-defined data center initiative has been going on for months, the release sounds as if it were put together to counter specific complaints from Elliott. The letter, signed by Elliott portfolio manager Jesse Cohn, said the EMC storage company and VMware “hinder one another” because they compete in areas, and the relationship prevents them from developing other critical relationships. Cohn said EMC’s stock is underperforming, the company is undervalued, and EMC and VMware would both be better off apart.
“As time passes, this untenable situation is going to get worse,” he wrote to EMC.
While launching the latest version of Red Hat Storage Server yesterday, the vendor provided little insight into the long-term positioning of its storage software portfolio and the chances that it might combine its Gluster-based Storage Server and Inktank Ceph Enterprise product lines.
Ranga Rangachari, vice president and general manager of storage and big data at Red Hat, said the company hopes to “get back to our customers and partners in the very near future with a consolidated vision of where this journey is going.” He addressed the topic in response to a question during the company’s Webcast entitled “Advancing software-defined storage,” which he said customers view as the ability to take advantage of industry-standard x86 servers with the intelligence resting in the software.
Rangachari noted simply that Red Hat’s acquisition of Inktank Storage Inc. this year brought object- and block-based storage to the table and complemented the file system capabilities the company gained through its 2011 acquisition of Gluster Inc.
Gluster had sold a supported version of the open source GlusterFS distributed file system in much the same way that Inktank sold a supported version of open source Ceph. Any innovative software development work rests with their respective open source project communities.
“The Gluster and the Ceph communities continue to thrive independently and thrive really well,” said Rangachari, claiming that Gluster and Ceph combined for almost two million downloads during the last nine months. “The innovation that’s going on on both those projects will continue to happen unabated.”
Red Hat put out new versions of each of the commercially supported products this year. Storage Server 3, launched yesterday, is based on open source GlusterFS 3.6 and adds support for snapshots, multi-petabyte scale-out capacity, flash drives and Hadoop-based data analytics. Inktank Ceph Enterprise 1.2, released in July, was based on open source Ceph’s Firefly release and added erasure coding, cache tiering and updated tools to manage and monitor the distributed object storage cluster.
The Ceph open source project claims to be a unified system providing object, block and file system storage. Ceph’s file system runs on top of the same object storage system that provides object storage and block device interfaces, according to the project’s Web site.
“It’s fair to say that file is probably the least well evolved of those three,” said Simon Robinson, a research vice president in storage at New York-based 451 Research LLC. “The file capability is very immature. It’s not enterprise-grade.”
But, as the Ceph technology improves, Red Hat will need to confront the question of whether to continue to focus on Gluster and Ceph, said Robinson.
“I think Red Hat’s bet buying Gluster was, ‘Hey, look at all this unstructured data. Look how quickly it’s growing. We need a play here.’ Three years ago, that play was NAS. Today it looks slightly different,” said Robinson. “When we think about the growth of unstructured data, it’s actually object that is seen as the future architecture rather than NAS.”
He cited Amazon and Microsoft Azure as proof points of the object model working at scale. “It’s just a case of how does that percolate down into the enterprise. It will take time,” he said.
Robinson said he doesn’t think it makes sense for Red Hat to physically merge Gluster and Ceph. He predicted that if Red Hat Storage does catch on, its success will be through Ceph – “the darling of the storage startup world” – tied to the broader success of the open source OpenStack cloud technology platform. Ceph has already started to gain momentum among cloud service providers, he said.
“Everybody’s playing with OpenStack, and if you’re playing with OpenStack, you’ve probably heard of Ceph. And Ceph has the interest of the broader storage community,” said Robinson. “Other big players are really interested in making Ceph a success. That works for Red Hat’s advantage.”
Henry Baltazar, a senior analyst at Cambridge, Massachusetts-based Forrester Research Inc., said he sees no problem with Red Hat having Gluster-based file and Ceph-based block and object storage options at this point, since the company doesn’t have much market share.
“They’re going to have two platforms in the foreseeable future. Those aren’t going to merge,” predicted Baltazar. “Gluster is definitely the file storage type. There are ways they could use it that can complement Ceph. It still remains to be seen where it will wind up 10 years from now.”
EMC is aiming its new RecoverPoint for Virtual Machines at cloud DR, in partnership with cloud security vendor CloudLink Technologies.
RecoverPoint for Virtual Machines is a hypervisor-based version of EMC’s RecoverPoint replication software. It will be generally available Nov. 17. EMC will also integrate the product with CloudLink SecureVSA, which provides encryption for data at rest and data in motion.
The combined products can allow service providers to build DR as a Service (DRaaS), and enterprises can use them to replicate data to private or public clouds for DR.
RecoverPoint for Virtual Machines is a software-only product. Unlike previous versions of RecoverPoint, it is storage-agnostic, so it doesn’t require EMC arrays to run. It works with any VMware-certified storage. It is not hypervisor-agnostic yet, though. It supports VMware vSphere today, with support for Microsoft Hyper-V and KVM hypervisors on the roadmap.
It is EMC’s first replication software that works on the individual VM level. Instead of replicating storage LUNs as other RecoverPoint versions do, RecoverPoint for Virtual Machines splits and replicates writes for VMware vSphere VMs. It requires splitter code on each ESXi node running protected VMs, and at least one virtual appliance at each site. Customers can replicate VMs regardless of hardware running at either end.
CloudLink SecureVSA adds security. It allows customers to store and manage encryption keys on-premises.
“One of the big inhibitors of going to a public cloud is security,” said Jean Banko, director of product marketing for EMC’s data protection division. “That’s why we partnered with CloudLink.”
Oracle wasn’t the only vendor to toot its own storage horn at Oracle OpenWorld this week. A couple of smaller vendors played up their budding relationship with the database giant to show how they are growing their number of enterprise customers.
Array vendor Nimble Storage added a pre-validated SmartStack reference architecture for Oracle’s JD Edwards EnterpriseOne application and copy data management pioneer Actifio increased its integration with Oracle apps to attract more database administrator customers.
Nimble first introduced its SmartStack reference architectures in late 2012 through a partnership with Cisco, and claims more than 200 SmartStack customers. The JD Edwards version consists of Nimble’s CS300 storage array, Cisco UCS Mini Server, Oracle VM, Oracle Linux, Oracle Database, and JD Edwards EnterpriseOne 9.1.
Nimble’s previous SmartStack flavors include VDI for Citrix and VMware, Microsoft Critical Applications, Oracle Critical Applications, Desktop/Server Virtualization, Server Virtualization/Private Cloud and Data Protection with CommVault.
Radhika Krishnan, Nimble VP of product marketing and alliances, said the JD Edwards SmartStack came about because more enterprises are starting to deploy Nimble storage, and JD Edwards can be a tricky app to size correctly.
“Sizing tends to be challenging, depending on the number of end users you have,” she said.
Actifio used the conference to show off its expanded support for Oracle applications, allowing it to become an Oracle Platinum Level partner. The expanded support for Actifio CDS includes Oracle Database 12c, RMAN and Exadata products.
The value of Actifio’s software is that it allows organizations to use one copy of data for production, backup, test/development or any other application.
Actifio reps said they modified their platform’s workflows to enable DBAs to automate the data flow, with the help of RESTful APIs.
Oracle DBAs can use the automated workflow to provide copies of their databases to developers in minutes, according to Actifio senior director of global marketing Andre Gilman.
“I call it RMAN on steroids,” he said. “The old school way can take days to weeks, and even with newer technologies it takes hours.
“You can create a live clone of a database and update a virtual image to all your developers at the same time. You don’t have to repeat the process as part of daily maintenance. You put it in as one workflow and it’s all automated.”
Actifio director of product marketing Chris Carrier said the integration came from months of co-development work with Oracle. He said Actifio uses RMAN to do its change-block tracking and built its LogSmart log management system specifically around Oracle, although it works with other databases. “If you show Oracle DBAs a different way to manage data, they get nervous. But if you’re leveraging RMAN, they like that,” Carrier said.
EMC Corp. made available the 3.0 release of its XtremIO all-flash array today with new inline compression capabilities and performance improvements – but existing customers who want the software upgrade need to prepare for significant disruption to their production environments.
Josh Goldstein, vice president of marketing and product management for EMC’s XtremIO business unit, confirmed that users will need to move all of their data off of their XtremIO arrays to do the upgrade and then move it back onto the system once the work is complete.
EMC’s XtremIO division has taken some heat on the disruptive – and some say “destructive” – nature of the upgrade, especially in view of the company’s prior claims that the product supported non-disruptive upgrades (NDU) of software.
Goldstein said the company decided to make an exception for the 3.0 upgrade based on customer input about the inline compression capabilities, which he claimed could double the usable capacity of an XtremIO array in many cases.
“This was a choice that was made, and it was not an easy choice,” said Goldstein. “We could have delayed the feature. Originally we were planning to put this in later in the roadmap. If we had chosen to, we could have tied this to another hardware release, and it would have been something that existing customers could never take advantage of. Our customer base told us emphatically that that was not what they wanted.”
Goldstein said that EMC will provide, at no cost to customers, the option of professional services and extra XtremIO “swing” capacity, if necessary, to ensure that they have an equivalent system on which to put their data while the upgrade is taking place.
The disruptive nature of the 3.0 upgrade came to light recently through an “XtremIO Gotcha” blog post from Andrew Dauncey, a leader of the Melbourne, Australia, VMware user group (VMUG). Dauncey wrote: “As a customer with limited funds, this is the only array for a VDI project, where the business runs 24/7, so to have to wipe the array has massive impacts.” He said a systems integrator had offered a loan device to help with the upgrade.
Dauncey worked as a systems engineer at a public hospital in Australia at the time of his initial post on Sept. 14. He has since gone to work for IBM as a virtualization specialist.
In a blog post last Sunday, Dauncey noted EMC’s “marketing collateral” that advertised “non-disruptive software and firmware upgrades to ensure 7×24 continuous operations,” and he accused EMC of “false advertising” prior to the release of the updated 3.0 firmware.
Goldstein said, “The releases that we’ve had from the time the product went GA up until now were all NDU. The releases that we have going forward after this point will all be NDU as well.”
Chris Evans, an IT consultant who writes the blog “Architecting IT,” said via an e-mail that HP upgraded its platform to cater to flash, and SolidFire released a new on-disk structure in the operating system for its all-flash array without disruptive upgrades. He said, what’s surprising in the XtremIO case is “that EMC didn’t foresee the volume of memory to store metadata only 12 months after their first release of code.”
Chad Sakac, a senior vice president of global systems engineering at EMC, shed some light on the technical underpinnings of the upgrade through his “Virtual Geek” personal blog, which he said has no affiliation to the company. He said the 2.4 to 3.0 upgrade touches both the layout structure and metadata indirection layer, and as a result, is disruptive to the host. He pointed to what he said were similar examples from “the vendor ecosystem.”
Goldstein confirmed that the block size is changing from 4 KB to 8 KB, but he said the block-size change is not the main reason for the disruptive upgrade. He said it’s “all these things taken together” that the company is doing to both add compression and improve performance.
“We already had inline deduplication in the array, and that means that you have to have metadata structures that can describe how to take unique blocks and reconstitute them into the information that the customers originally stored,” Goldstein said. “When you add inline compression, you have to have similar metadata information about how to reconstitute compressed blocks into what the customer originally stored. Those kinds of changes are things that change the data structures in the array, and that’s what we had to update.”
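The kind of metadata Goldstein describes can be sketched in a toy model – this is illustrative Python, not EMC’s actual on-array structures. Deduplication keeps one copy of each unique block plus an index for reconstituting logical data, and adding compression reuses the same indirection:

```python
import hashlib
import zlib


class DedupeStore:
    """Toy inline dedup + compression store: unique blocks are stored once
    (compressed), and a metadata index maps logical addresses to them."""

    def __init__(self):
        self.chunks = {}   # fingerprint -> compressed unique block
        self.index = {}    # logical address -> fingerprint (the metadata)

    def write(self, addr, block):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.chunks:              # store each unique block once
            self.chunks[fp] = zlib.compress(block)
        self.index[addr] = fp                  # metadata update only

    def read(self, addr):
        """Reconstitute the original block via the metadata index."""
        return zlib.decompress(self.chunks[self.index[addr]])


store = DedupeStore()
store.write(0, b"A" * 8192)
store.write(1, b"A" * 8192)   # duplicate write: adds metadata, not a chunk
print(store.read(1) == b"A" * 8192, len(store.chunks))
```

Changing what the chunk map and index record – as adding compression alongside dedup does – changes the on-disk layout of these structures, which is the kind of change that can force a disruptive migration.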
Goldstein said customers should not have to endure an outage in the “vast majority of cases.” Goldstein claimed that XtremIO is “overwhelmingly used” in virtual environments and moving virtualized workloads is not difficult. He mentioned VMware’s Storage VMotion and EMC’s PowerPath Migration Enabler as two of the main options to help, but he said there are others.
Customers also may choose to remain on the 2.4 code that EMC released in May. Goldstein said that EMC will continue to provide bug fixes on the prior 2.4 release for “quite a long time.”
“There’s nothing forcing them to upgrade,” he said.
Craig Englund, principal architect at Boston Scientific Corp., said EMC contacted Boston Scientific’s management about the disruptive upgrade in the spring. At the time, the IT team already had a loaner array for test purposes, and they asked to keep it longer after learning the 3.0 upgrade was destructive.
“It reformats the array. It’s destructive. It’s not just disruptive,” Englund said. “You have to move all of your data off the array for them to perform the upgrade.”
But, Englund said the team can “move things around storage-wise non-disruptively” because the environment is highly virtualized through VMware. He said he’s willing to go through the inconvenience to gain the ability to run more virtual desktops and SQL Server databases on the company’s existing XtremIO hardware. Early tests have shown a 1.9 to 1 capacity improvement for the database workloads and 1.3 to 1 for VDI, he said.
“They could have said, ‘If you want these new features, it’s coming out in the next hardware platform, and you’ll have to buy another frame to get it.’ But, they didn’t, and I think that’s great,” Englund said. “To try to get this out to all of the existing customers before they get too many workloads on them, I think, was considerate.”