Few people would dispute that information is important. Its value varies and changes over time, but it remains the most critical resource for most organizations.
Yet, we see storage products where the importance of storing, accessing, and managing information is not addressed effectively or is seemingly trivialized.
Managing information is complex, especially as its value changes. Several requirements must be met when storing and managing information: it must be available, 100 percent valid, secure from an access standpoint, and protected from disasters, hardware failures, and human errors.
People in IT often forget these requirements, as do vendors. We see storage products that only emphasize moving data to execute a program against it, assuming there is no real storage issue beyond that. They ignore that, given the high value of information, information resides with storage; it is only transient on servers and networks. The stewardship of information required for processing and analysis is the responsibility of where the data is stored.
Another important consideration is that information is stored for a long time, typically for decades. The real concerns are about storing, managing, and administering the information over that period. The infrastructure will change over that time. Think of how many servers will be replaced over the information’s lifespan. This is also the area where major costs are incurred. The costs for storing and managing information over its lifespan can be far more significant than other technology costs.
Systems and solutions must make allowances for the cost of storing information for the lifespan of the data. When solutions do not address that concern, someone (the customer who understands the value of the information) must incur greater cost and effort to add those capabilities. To not do so adds an unacceptable measure of risk. The priority of information must be covered effectively when evaluating and making decisions about storage.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Commvault’s revenue keeps sliding while data protection software rival Veeam makes steady gains. Coincidence, or is Veeam taking deals that Commvault used to win?
Commvault’s revenue of $139 million was down nine percent from last year and eight percent from the previous quarter. Software revenue of $56.5 million fell 22 percent from last year and 19 percent from the previous quarter. Revenue from enterprise deals ($100,000 or more) dropped 29 percent from the previous quarter.
Commvault executives forecast revenue for the fiscal year, which ends in March 2016, to be roughly the same as last year — $608 million.
Veeam, which is a private company but discloses partial financial results, said its revenue bookings increased 22 percent last quarter over the previous year. Veeam claims its enterprise revenue (Veeam considers this deals of more than $50,000) increased 64 percent year-over-year.
Veeam said its total 2014 revenue bookings were $389 million, and it has grown more than 20 percent in each of the first two quarters of 2015. That would bring its 2015 revenue close to $500 million if it keeps up that pace.
“The headlines for the quarter are that we had a more challenging quarter than expected,” Commvault CEO Bob Hammer said on his company’s earnings call.
Commvault has actually had five consecutive challenging quarters as it rebuilds its sales organization and shifts its product strategy. Hammer said the company and the industry are in transition, but he doesn’t see Veeam as the cause of Commvault’s problems.
Hammer said Commvault, which handles virtual and physical backup and recovery, archiving, compliance, and cloud data protection, goes far beyond Veeam's technology. Veeam's specialty is virtual data protection, but it has added replication and cloud capabilities in its move from an SMB play to the midmarket and enterprise.
“As far as Veeam and the enterprise, Veeam does not have a platform,” Hammer said. “If you want to talk about enterprise, you can incrementally improve scale, but the scale we are talking about is nowhere near where Veeam’s is in terms of big enterprise scale.”
Commvault COO Al Bunte added: “It’s hard to do big enterprise scale and operational automation without a platform … And to my knowledge, the Veeam folks aren’t there yet.”
Doug Hazelman, Veeam’s VP of product strategy, sees it differently. He said Commvault has been the main competitor for Veeam as it moves into the enterprise.
“Commvault is definitely the one we’re competing with most as we go up market,” Hazelman said. “Look at the results we’ve had. We’re growing, and they’re not.”
Both companies have major upgrades coming.
Veeam is preparing to launch Veeam Availability Suite 9 with image-based VM replication to the cloud to enable disaster recovery as a service and greater storage array snapshot support.
Commvault also plans a 2015 launch of the next version of its Simpana platform, although it might not be called Simpana. Commvault executives never said the word Simpana on the earnings call Tuesday. They repeatedly referred to their "platform," and emphasized the point products they have added in the past year in areas such as virtual data protection, cloud, and endpoint protection. After all the changes Commvault is going through, it would not be surprising if the company rebranded its platform.
Cisco finally figured out what to do with its Invicta all-flash array acquired from startup Whiptail for $415 million. It killed it off.
Cisco put out an end-of-life announcement last Friday for Invicta, and has stopped taking orders for the array. If you're one of the few who bought an Invicta array, your final day to renew your service contract is Oct. 19, 2019, with July 31, 2020 designated as the last day of support.
Cisco bought Whiptail in Sept. 2013, but the deal had problems from the start. With close storage partners such as EMC and NetApp pushing into flash storage at the same time, Cisco hesitated to declare the Whiptail arrays storage products. They were rebranded as Invicta, sold by the Cisco UCS server group, and omitted from the Vblocks sold through Cisco’s VCE alliance with EMC and the FlexPod reference architectures with partner NetApp.
However, quality issues prompted Cisco to take Invicta off the market last September with plans to fix the problems and bring it back out. That never happened; Cisco confirmed last week that the product is finished.
Not only did Google offer new incentives with its Cloud Storage Nearline service to tempt customers to switch from Amazon and Microsoft Azure last week, the company also beefed up its ecosystem to make it easier to change cloud platforms.
Google added Actifio, CloudBerry Lab, Pixit Media and Unitrends as Google Cloud Platform partners. They join earlier partners Veritas/Symantec, NetApp, Iron Mountain and Geminare.
The partners have integrated Cloud Storage Nearline with disaster recovery, backup and archival, and hybrid cloud solutions using Google's open APIs. For example, copy data management vendor Actifio has mostly focused on selling into enterprise-level environments but now has set its sights on cloud platforms.
Actifio customers will be able to add a Vault profile that lets them move an application directly into Nearline.
Actifio CEO Ash Ashutosh said it made sense to join the Google Cloud Platform Premier Partner program because "50 percent of our business comes from these cloud service providers."
Cloud backup vendor CloudBerry allows its managed service provider (MSP) partners to integrate Nearline with all other Google Cloud Platform services using the same unified API.
Unitrends Free for Google Cloud Platform will also support Nearline. Unitrends Free is free backup software that deploys as a virtual appliance in VMware vSphere and Microsoft Hyper-V, backing up data locally and connecting to the cloud.
Pixit Media, which sells storage for broadcast companies, has object plug-ins to Google Cloud Storage and Nearline.
EMC NetWorker and Avamar, and CommVault Simpana backup software also allow users to move data to Nearline, as does Egnyte’s file sharing application.
Google is trying to make an aggressive run at Amazon Web Services (AWS) and Microsoft Azure with its Nearline archiving service, plus new services such as the Cloud Storage Transfer Service and the Switch and Save program. Switch and Save offers 100 PB of free storage in Nearline for up to six months for customers who switch from any other cloud provider or from on-premises environments.
Google's Cloud Storage Nearline is its answer to Amazon Glacier for cheap, cold storage. A new on-demand I/O service works with Nearline to allow faster recovery for customers with large amounts of data.
EMC executives reduced their 2015 revenue forecasts for the second time this year following a quarter of tepid growth. They also said the vendor will implement plans to cut costs by $850 million a year and shift investment from traditional storage products such as its VMAX and VNX arrays to emerging technologies including flash and software-defined storage.
EMC CEO Joe Tucci also continued to defend keeping the EMC Federation intact instead of spinning off VMware or other significant pieces.
The forecast reduction and spending cuts came out during EMC's quarterly earnings call. The vendor reported revenue of $6.1 billion last quarter, up three percent year over year. The storage business grew only one percent to $4 billion. Based on those results, EMC now expects $25.2 billion in 2015 revenue, a $500 million downward adjustment from its previous guidance and $900 million below its original 2015 forecast.
“The results were mixed. We fell a bit short of revenue expectations,” Tucci said, noting profit of $487 million was a bit better than expected.
The plan to save $850 million annually in cost cuts will be in place by the end of 2016, with $50 million in cuts coming this year, according to CFO Zane Rowe. Tucci and Rowe said some of those savings will be shifted to growth products such as flash and ViPR, ScaleIO and Elastic Cloud Storage.
EMC's emerging storage products, which include Isilon clustered NAS along with flash and software-defined storage, increased 49 percent year over year to $718 million. XtremIO grew more than 300 percent.
On the downside, VMAX revenue fell 13 percent to $892 million, and backup and data recovery dropped nine percent to $1.43 billion.
David Goulden, CEO of EMC Information Infrastructure (storage), said he expects traditional storage – VMAX and VNX – to grow two percent annually until 2018, and only about one percent this year.
“We believe the traditional storage market will not improve this year,” he said, adding that EMC will invest in flash, software-defined storage, big data and the cloud “to remain ahead of the market. We will rebalance resources to self-fund growth initiatives.”
Goulden said he expects a new Isilon release and the general availability of the DSSD flash system in the second half of 2015 to help sales. Tucci said, "I've never seen a product with as much demand for betas as DSSD."
Tucci repeated that he is opposed to breaking up the EMC Federation, which includes VMware, Pivotal and RSA Security. Tucci said EMC II and VMware realize twice as much revenue from deals where both companies are involved as when each is in deals alone. He maintains that in the shift to convergence and cloud computing, the combination of companies makes EMC stronger.
“Splitting this federation or spinning off VMware is not a good idea,” he said. “One of the biggest transitions every company has to do is move to the cloud. Data centers are moving to cloud technology, both private and managed clouds. If you are doing that, would you rather do that as just VMware, as just EMC, as just Pivotal? Or are you much stronger doing it together?”
Tucci, who is also the EMC chairman, would not speculate on when EMC would name his successor as CEO. “I don’t want to comment on the timing,” said Tucci, who has postponed his retirement several times. “I am committed to giving the board the time they need to make sure the succession process works terrifically. I don’t want to put a deadline on the board, but they are actively engaged [in the succession process].”
SimpliVity claims its second quarter sales bookings increased 250 percent over last year, fueled by its Cisco partnership and an avalanche of interest in hyper-convergence.
SimpliVity does not give as much information on its financials as rival Nutanix, but is widely believed to be No. 2 behind Nutanix in hyper-convergence market share. The two private companies were pioneers of hyper-convergence, which combines storage, servers, virtualization and networking in one box.
SimpliVity began selling its software and the ASIC that handles data deduplication along with Cisco UCS servers in early 2015. It also continues to sell its OmniCube bundled appliances.
SimpliVity CEO Doron Kempel said sales with Cisco increased four times from the first quarter to the second, although he said Cisco still accounts for less than 20 percent of SimpliVity sales. He said the company has more than 550 customers and 2,000 units in the field.
Kempel said besides the bump from Cisco, SimpliVity also benefitted last quarter from hype around hyper-convergence. That hype is also attracting more competition, though.
“The market is starting to mature. There is much more noise about hyper-convergence,” Kempel said. “Now large vendors are trying to convolute what it means. A lot of analysts are completely confused. They confuse hyper-convergence with convergence.”
Many vendors are pushing converged systems – packaging discrete servers, storage and hypervisors – but they are also getting into hyper-convergence. VMware is among them with its Virtual SAN (VSAN) software, which it makes available to hardware partners through its EVO:RAIL program. Dell, Hewlett-Packard, EMC and Hitachi Data Systems are among the large storage vendors that sell EVO:RAIL systems. Dell also sells Nutanix software on its PowerEdge servers through an OEM deal.
Kempel said most of SimpliVity’s deals are still against legacy storage, with about 20 percent coming head-to-head against Nutanix.
He said SimpliVity rarely sees VMware or EVO:RAIL appliances in the field. "The customers we talk to expect hyper-convergence to run tier one applications across sites," he said. "EVO:RAIL and Nutanix are focused on single sites and VDI."
Kempel will be talking to a lot more customers personally. Despite the increase in sales, he said he has replaced Mitch Breen as senior vice president of global sales.
“After 19 months with us, he completed his mission and I have assumed the role of global head of sales,” Kempel said. “We are intensifying our go-to-market activities.”
SimpliVity also added Jose Almandoz as SVP of operations and Randi Nichols as VP of human resources to help manage growth that Kempel said amounts to 10 new employees per week. He said he plans to grow from its current 550 employees to around 800 employees.
SimpliVity closed a $175 million funding round in March.
Data protection software vendor SIOS Technologies is branching out into IT analytics.
The vendor this week launched SIOS iQ, an analytics platform developed for virtual machines and their infrastructure that collects data and runs algorithms to identify patterns and possible problems. It can be used to troubleshoot or project the effect of changes to the technology.
The initial release works only with VMware hypervisors, but SIOS COO Jerry Melnick said the platform is designed to work with any virtual environment and he expects it to be expanded in future releases. The application tracks performance, efficiency, reliability and capacity metrics in an infrastructure, and alerts customers if it detects a potential problem or a way to make improvements.
Melnick describes iQ as "a simple way to get answers to difficult questions in a complex environment. A lot of customers have moved into these [virtual] spaces with good intentions, but they're dynamic environments and they keep getting bigger and bigger."
Customers download iQ, install it and it works without needing any configuration, Melnick said.
The software isn’t specific to storage, but does look at the storage as well as hosts, VMs, applications and networks.
“Storage is probably the most interesting space,” Melnick said. “There are more issues in that space than the others.”
SIOS iQ’s host-based caching analytics can help improve storage performance. It analyzes blocks written to disk and identifies the read ratio and load profile to identify the VMs and disk that will benefit from caching. The application uses that information to make configuration recommendations on how much cache to add and what size the cache blocks should be, and predicts the performance impact from implementing the recommendations.
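The caching analysis described above can be sketched roughly as follows. This is an illustrative Python example, not SIOS code; the VM names, the read-ratio threshold, and the sizing heuristic (cache sized to the hot working set plus 20 percent headroom) are all assumptions for the sake of the sketch.

```python
# Illustrative sketch of read-ratio-based cache sizing (not SIOS code).
# For each VM we look at read/write I/O counts and an estimated hot
# working set; VMs with a read-heavy profile benefit most from a
# host-based read cache, so only those get a recommendation.

def cache_recommendations(vm_stats, read_ratio_threshold=0.7):
    """vm_stats maps VM name -> (reads, writes, hot_set_gb)."""
    recs = {}
    for vm, (reads, writes, hot_set_gb) in vm_stats.items():
        total = reads + writes
        read_ratio = reads / total if total else 0.0
        if read_ratio >= read_ratio_threshold:
            # Size the cache to cover the hot working set, padded 20%.
            recs[vm] = {"read_ratio": round(read_ratio, 2),
                        "cache_gb": round(hot_set_gb * 1.2, 1)}
    return recs

stats = {"sql01": (9000, 1000, 40.0),   # read-heavy: good candidate
         "log01": (2000, 8000, 10.0)}   # write-heavy: skip
print(cache_recommendations(stats))
```

The real product also predicts the performance impact of applying its recommendations, which would require a latency model on top of a profile like this one.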
SIOS iQ also identifies under-used VMs and unnecessary snapshots that can be eliminated to prevent snapshot sprawl.
Other features include performance root cause analysis and advanced analytics for Microsoft SQL Server.
Unlike newer array management software, iQ is not cloud-based, but SIOS plans to automatically deliver product upgrades every four to six weeks in what Melnick calls a SaaS (software as a service) delivery style. SIOS iQ is sold as an annual subscription, with a list price of $150 per host per month.
Hewlett-Packard has added two software applications, one new and one upgraded, to help manage unstructured data.
The new HP Storage Optimizer solution combines file analytics and policy-based storage tiering, while HP ControlPoint helps organizations prioritize which data is migrated to on-premises storage, the cloud, Hadoop or a virtual repository. The idea is to examine the contents for governance and risk assessment.
Storage Optimizer uses file analysis technology from the HP ControlPoint portfolio and works with HP Data Protector technology to handle file analysis and storage management of unstructured data across platforms, including Hadoop, SharePoint, Microsoft Exchange and the HP StoreAll unstructured data storage platform. The technology analyzes metadata to determine which information should be offloaded from tier-one storage to tier-two as a way to manage costs. The goal is to reduce the storage footprint and improve management of data that falls under compliance mandates. Storage Optimizer uses data deduplication across repositories to reduce redundant data.
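The metadata-driven offload policy described above can be sketched in a few lines. This is an illustrative example, not HP code; the directory layout and the 180-day cutoff are assumptions, and a real product would apply far richer policies than file age alone.

```python
# Illustrative metadata-driven tiering policy (not HP code): files not
# modified within a cutoff window are flagged for offload from tier-one
# to tier-two storage. Only metadata (mtime) is inspected, never file
# contents. The 180-day cutoff is an assumed policy value.
import os
import time

def tiering_candidates(root, max_age_days=180, now=None):
    """Return paths of files under root older than max_age_days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:  # metadata check only
                candidates.append(path)
    return candidates
```

A real tiering engine would then move or stub the flagged files; the point here is that the tier decision is made entirely from metadata, which keeps scans cheap even across large repositories.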
HP ControlPoint, which launched two years ago, has been updated to work better with Storage Optimizer and is being positioned for new use cases in file analysis and data migration. It is integrated with the HP Helion cloud via a built-in connector, and migrates only the most relevant data to the cloud.
“ControlPoint organizes data into categories and groups and based on that makes decisions on that content,” said David Gould, HP’s global director of information governance. “For instance, it is able to recognize data that is a contract so all the contract-based storage is designed through policy to go to certain storage. It allows you to identify content and take action on that content.”
A major problem that IT professionals have dealt with over time is the creation of islands of storage. A common cause of islands is when organizations purchase and deploy new systems with their own storage for a specific purpose.
Storage islands create problems in these areas for administrators:
• Data protection. This requirement is usually assigned to a single group to ensure its completion, manage recovery of information, and make sure business practices are followed. When storage spreads across islands, these tasks become more complex.
• Security. Islands of storage increase the effort required to address security for data-at-rest.
• Inflexible capacity. Islands prevent capacity from being applied to where there is immediate demand.
• Performance. Meeting changing performance demands on storage can be difficult and expensive. Each island of storage must be addressed as an individual case every time a performance challenge arises.
• Cost. The overall costs for managing storage islands can be significant and greater than expected when the islands were created.
The IT answer for islands of storage has been a consolidation to centralized storage, either through storage virtualization or large systems with advanced capabilities for performance, protection and security. Performance is addressed by the larger systems with the ability to manage quality of service and introduce solid-state storage. The economics for consolidation have been proven over time compared to isolated storage.
There are new ideas to "make storage easy" that have become popular but are creating even more islands of storage. Hyper-converged systems, converged systems and virtual SANs as implemented by many organizations create islands. It is difficult, if not impossible, to consolidate these systems. They deliver faster deployment and greater simplicity, but they complicate data management. They are often deployed in a manner that leaves IT professionals unable to meet the data management requirements of the business.
The challenges represent opportunities, however, for companies to create new solutions to solve or at least lessen the problems. It is unlikely that the requirements around protection, security, capacity demands, performance, and overall management cost will go away or be redefined out of existence. These will require new solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
One petabyte-scale storage customer traveled to this week’s Red Hat Summit in Boston with dreams of Ceph and Gluster merging into a single product. He hoped Red Hat would take the best pieces of each and “slam them together.” He suggested the vendor could dub the new creation Red Hat Storage, Red Hat Software-Defined Storage or Red Hat Scalable Storage, or “come up with a fun new name.”
“I was waiting for it, and it just didn’t happen,” said Nicholas Gerasimatos, director of engineering at Fair Isaac Corp. (FICO).
And it isn’t likely to happen soon, if ever, according to the heads of product management for Red Hat’s commercially supported versions of open source Ceph and Gluster software.
“They’re very different, and the way we see Ceph and Gluster is that they’re targeted at different parts of the market,” said Neil Levine, the Red Hat director of product management who laid out Ceph’s long-term roadmap during a conference session this week. “Ceph is Fortune 500, ‘I’m building a huge Amazon-style cloud.’ Gluster is not mid-market, but it’s certainly for customers that have a problem ‘that I need to fix and I don’t have months to set this up; I’ve got like a week.’ ”
Levine said the reason the conversation crops up about combining Ceph and Gluster is “mainly because customers want a file system, which Gluster provides, but then they like the distributed smarts underneath Ceph.”
“It’s something that I don’t think we’re likely to do,” Levine said. “Customers can do it, but it’s not a supported configuration that we’re going to recommend or push. I think if you want our file system, you should use Gluster, and trying to put Ceph underneath it, you’re just giving yourself an operational headache and potentially expense if you’re going to buy those products from us as well.”
Separate communities develop the open source Ceph and GlusterFS projects. Ceph’s community is working on a file system, but CephFS is generally regarded as not ready for enterprise prime time yet. In the meantime, Ceph sees use for block and object storage. Gluster offers file and object capabilities.
“Instead of trying to combine the two products, we will come up with a control plane that makes these two products look consistent,” said Sayan Saha, head of product management for Red Hat Gluster Storage. “Our eventual goal is actually to get rid of the whole concept of Ceph and Gluster.”
Saha said the new control plane could provision, manage, monitor and tune Gluster and Ceph in the same way, “where all you care about is data services for your workloads as opposed to caring about where it is coming from.” Red Hat demonstrated the unified storage management technology at its booth in the conference’s exhibit hall. The company wrote the controller software in the last six or seven months, according to Saha.
“You want virtual block storage. You want file storage. Or, you want object storage. You will be able to come to that controller and request that, and it will be served out to you,” Saha said. “If you choose that you want to do block storage, it will give you Ceph, and then you go to the Ceph provisioning. If you say file, it will say Gluster.”
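The routing behavior Saha describes can be illustrated with a minimal dispatcher: the control plane maps a requested data service to whichever backend provides it. This is a sketch of the concept only, not Red Hat code; the service-to-backend mapping and the provision stub are assumptions.

```python
# Minimal sketch of the control-plane routing Saha describes (not Red
# Hat code): a request names a data service, and the controller decides
# which backend serves it. Block and object go to Ceph, file to Gluster.

BACKENDS = {"block": "ceph", "object": "ceph", "file": "gluster"}

def provision(service, size_gb):
    """Route a storage request to the backend that serves it."""
    backend = BACKENDS.get(service)
    if backend is None:
        raise ValueError(f"unknown service type: {service}")
    # A real control plane would now call Ceph or Gluster provisioning
    # APIs; here we only return the routing decision.
    return {"service": service, "backend": backend, "size_gb": size_gb}

print(provision("block", 100))  # routed to Ceph
print(provision("file", 500))   # routed to Gluster
```

The design goal, as Saha puts it, is that the caller only asks for a data service and never needs to know which product fulfills it.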
Red Hat currently recommends Ceph or Gluster based on the workloads the customers intend to run, but there can be overlap between the two products. For instance, Gerasimatos said when Red Hat visited FICO, one group told them to use Gluster, and the other group said Ceph. FICO engineers ultimately chose Ceph and decided to run the free, open source version of Gluster on top of Red Hat Ceph in cases where they needed a file system.
Craig Hadix, a data center architect for a global systems integrator, said Red Hat’s two-product storage strategy can be confusing. He said he would be surprised if Gluster is still around in a year. He thinks it would be smart to “take the feature set that Gluster provides and integrate it into one storage product that has Ceph and Gluster features.”
But, combining the source code from two distinct software applications can be a bear of a project. Just ask NetApp. The company spent years trying to merge the scale-out NAS software from its Spinnaker Networks acquisition with its Data OnTap operating system.
“NetApp does aggregation for eight years,” Saha said. “There’s no product.”