A DataCore IT survey on the state of software-defined storage, hyper-converged systems and cloud storage showed a gradual uptake of some of the most heavily promoted new technologies.
For instance, flash use is growing, yet it will represent only a small percentage of overall storage capacity in 2017, according to the DataCore IT survey of 426 customers and prospective customers conducted from late 2016 through April 2017 via SurveyMonkey.
The majority (76%) of the surveyed IT professionals indicated flash would represent less than 20% of their storage capacity in 2017. Among that group, 14% weren’t using flash at all, and 32% projected less than 10% of their storage capacity would be flash-based.
However, the IT survey respondents listed all-flash arrays as their top preference to overcome performance problems, followed by software acceleration on the host machine and switching to in-memory databases.
Flash also factored into the responses to the question: “What technology disappointments or false starts have you encountered in your storage infrastructure?” Flash failed to accelerate applications for 16% of the IT survey respondents.
The most cited disappointments were “cloud storage failed to reduce costs,” selected by 31%, and “managing object storage is difficult,” mentioned by 29%.
Confusion over hyper-converged
The DataCore survey indicated there’s confusion over what the term “hyper-converged” means. A plurality of survey respondents (41%) think hyper-converged means software that is tightly integrated with the hypervisor but hardware agnostic. Another 27% view hyper-converged as an “integrated appliance” with “hardware and software locked together.”
The majority of the IT survey respondents (67%) have not deployed hyper-converged infrastructure (HCI), although 34% said they are strongly considering it. The primary reason for ruling out hyper-converged was lack of flexibility, followed by expense and vendor lock-in.
Among those who have deployed HCI, 6% have standardized on it, 7% have a few major deployments, and 20% have a few nodes. Top reasons for deploying or evaluating hyper-converged systems were simplifying management (48%), ease of scale-out (39%), and reducing hardware costs (35%). Leading use cases were databases, data center consolidation, enterprise applications such as customer relationship management (CRM) and enterprise resource planning (ERP), and virtual desktop infrastructure (VDI).
Surveyed IT pros noted the following business drivers for implementing software-defined storage: simplify management of different models of storage (55%), future-proof infrastructure (53%), avoid hardware lock-in from storage manufacturers (52%), and extend the life of existing storage assets (47%).
When DataCore conducted the IT survey in 2015, a lower percentage (45%) indicated they were trying to simplify management of different storage classes by automating frequent or complex storage operations.
Top use cases that IT pros noted for public cloud storage were long-term archive (35%), back up to cloud and restore on premises (33%), and disaster recovery (33%). A small percentage (11%) said they use the public cloud for primary storage, but a substantial percentage (40%) are not currently evaluating or using the cloud for storage.
For IT pros unwilling to move applications to the public cloud, the primary reasons are security (57%), sensitive data (56%) and regulatory requirements (41%). The applications they’re most willing to move to public or hybrid cloud infrastructure were select enterprise applications such as Salesforce (33%), data analytics (22%), databases (21%), and virtual desktop infrastructure (VDI), according to the survey.
DataCore said the 426 IT professionals who responded to the survey are using or evaluating software-defined storage, hyper-converged systems and cloud storage. The majority of the respondents (84%) were from North America and Europe.
NetApp quietly slipped an acquisition of storage memory software startup Plexistor into an earnings call otherwise noteworthy for strong results last quarter and a disappointing forecast for this quarter.
NetApp CEO George Kurian disclosed the Plexistor acquisition during the Wednesday night earnings call. NetApp did not include the acquisition in its press release or filing with the SEC, and provided no financial details.
Plexistor, which developed software that uses nonvolatile memory as primary storage, fits into NetApp’s strategy of trying to dominate in flash and other emerging storage technologies.
Plexistor came out of stealth in late 2015 with SDM – a software-defined memory product designed to deliver high-capacity nonvolatile storage at near-memory speed. The vendor chased customers running big data analytics and in-memory database processing.
SDM talks directly to a physical memory device, presenting DRAM and persistent storage in one namespace. It uses dual-inline memory module (NVDIMM) memory cards, NVMe flash and a spinning disk tier.
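Plexistor has not published SDM’s internals, but the general technique described here — persistent media that is byte-addressable and accessed like memory rather than through read/write system calls — can be sketched with a memory-mapped file. On real hardware the backing file would live on a DAX-mounted NVDIMM region; in this hypothetical Python sketch an ordinary temp file stands in:

```python
import mmap
import os
import tempfile

# Stand-in for a DAX-mounted persistent-memory path such as /mnt/pmem/data.
path = os.path.join(tempfile.mkdtemp(), "pmem_stand_in")

# Size the backing region before mapping it.
with open(path, "wb") as f:
    f.truncate(4096)

# Map the region into the process address space and store bytes directly --
# no read()/write() syscalls on the hot path, which is the point of
# memory-semantics storage.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"   # byte-addressable store
    mem.flush()           # analogous to flushing CPU caches to persistent media
    mem.close()

# The data persists in the backing store after the mapping is gone.
with open(path, "rb") as f:
    assert f.read(5) == b"hello"
```

This is only a conceptual sketch of persistent-memory access, not SDM’s actual implementation, which also layers NVMe flash and disk tiers behind the same namespace.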
In late 2016, Plexistor bundled SDM on Supermicro servers and Micron NVDIMM cards in a product called Persistent Memory over Fabric Brick (PMoF Brick). PMoF Brick was aimed at big data analytics and high-performance NoSQL databases.
It’s unclear whether NetApp will sell Plexistor as a separate product or embed the technology into its flash storage. Kurian referred to Plexistor as “a company with technology and expertise in ultra-low latency persistent memory. This differentiated intellectual property will help us further accelerate our leadership position and capture new application types and emerging workloads.”
SDM runs on servers, making it a candidate for incorporation in NetApp’s coming hyper-converged product that will use SolidFire’s all-flash technology. NetApp has yet to formally launch the hyper-converged appliance.
The Plexistor acquisition is an example of how NetApp is trying to use flash – as well as the cloud – to bounce back after a tough two-year stretch. Kurian said the vendor has turned the corner, offering its results last quarter as proof.
NetApp’s revenue of $1.48 billion last quarter increased 7.3% over the previous year and beat the consensus Wall Street analyst expectation by $40 million.
Kurian called the last fiscal year “a pivotal year for NetApp. We started the year with bold commitments, and we delivered against all of them. We did what many said could not be done: return the company to growth while simultaneously expanding operating margins. With each successful step in our transformation, my confidence in our ability to create new opportunities and execute against those opportunities grows.”
NetApp’s problems may not be completely behind it, though. Its forecast for this quarter fell short of expectations. NetApp guided for $1.24 billion to $1.39 billion this quarter, which amounts to around 2% growth at the midpoint and falls below analysts’ expectations.
When Kurian became CEO two years ago, NetApp struggled with its flash strategy and with a disruptive upgrade process that slowed customer upgrades from its Data OnTap 7-Mode operating system to Clustered Data OnTap (CDOT). The vendor was stuck in a cycle of flat or declining revenue, which it didn’t snap until the final quarter of 2016.
NetApp now stands second behind Dell EMC in all-flash revenue, and Kurian said most of the capacity on NetApp FAS arrays has moved to CDOT. He said 95% of FAS systems that shipped last quarter had CDOT installed. “The transition from 7-Mode to Clustered OnTap is now behind us,” he said.
The transition from disk to flash continues. Kurian called it the “early innings” in flash, and said NetApp’s all-flash revenue grew almost 140% last quarter. He said NetApp’s All-Flash FAS (AFF), EF Series and SolidFire are on pace for $1.4 billion in revenue over the next year.
“We are winning with flash and expanding our intellectual property in this market, positioning us for success in the multiyear transition from disk to flash,” he said.
As for the low guidance, Kurian said NetApp would rather err on the side of caution. “We really are giving realistic and, in some cases, conservative estimates,” he said. “We want to make sure we meet or beat every commitment we made, as we have the last four quarters.”
Pure Storage executives claim the flash pioneer is on pace for $1 billion in revenue for 2017, plus its first profitable quarter by the end of the year.
Neither goal is assured, but both seem possible following Pure’s first-quarter earnings report Wednesday. The vendor reported revenue of $183 million, up 31% from a year ago. It cut losses only slightly, to $62 million from $64 million a year ago, but executives claim spending decreases in the second half of the year, along with revenue growth, should bring it past break-even for the first time.
Pure forecast revenue of $214 million to $222 million this quarter and $975 million to $1.025 billion for the year. That annual prediction assumes a second-half surge in revenue, and Pure will have to hit at least the midpoint of its annual guidance to achieve the $1 billion goal.
On the Wednesday night earnings call, CEO Scott Dietzen cited the $1 billion and profitability goals and gushed that “all of that is setting up 2017 to be Pure’s best year yet.”
Of course, there is still a lot of 2017 left. Dietzen said he is counting on Pure Storage flash products taking advantage of hot industry trends. He expects to cash in on the emergence of nonvolatile memory express (NVMe) solid-state drives (SSDs) with new Pure Storage FlashArray//X systems, continued growth of Pure Storage FlashBlade unstructured data storage, and the need for storage in the emerging private cloud, artificial intelligence and machine learning markets.
Dietzen said FlashBlade, which became generally available at the start of 2017, is selling at twice the rate FlashArray did when it first launched nearly six years ago. “FlashBlade is transforming the unstructured data market in the same way FlashArray revolutionized structured data,” he said.
Pure Storage FlashArray//X is another key to Pure’s achieving its goals. While Pure currently trails Dell EMC, NetApp and Hewlett Packard Enterprise in all-flash revenue, the vendor is looking to pick up organizations that want the improved speed of NVMe over current SSDs. None of Dell EMC, NetApp or HPE has an all-NVMe system yet, although all major flash storage vendors will eventually add NVMe.
“We see a new set of use cases that NVMe opens up,” said Matt Kixmoeller, Pure’s vice president of products. “Certainly, faster database type workloads … but we’re also really going after consolidation of cloud providers. A lot of the cloud vendors out there have really consolidated on server DAS over the past few years, and now we have an opportunity to go in there with NVMe and take the flash out of each of those servers and consolidate it at the top of the rack to drive more efficiencies for them.”
Dietzen said Pure is looking to become the storage of choice for “roughly 80% of enterprise workloads not currently a candidate for the public cloud.” He said enterprises are turning to private cloud in great numbers.
“While we occasionally compete with the big three public clouds … our customers use Pure’s data platform in conjunction with the public cloud, particularly for datasets that are too large to move across the internet,” he said.
NEW ORLEANS – Veeam Software opened its user conference with the launch of the latest version of its data protection software, moving deeper into cloud and physical device support.
Veeam Availability Suite v10, rolled out today at VeeamON, adds continuous data protection, support for network-attached storage and native object storage. The object storage support includes data on Amazon Web Services (AWS) and Microsoft Azure. Veeam Availability Suite v10 frees up primary backup storage with policy-driven automated data management for long-term retention and compliance. The major upgrade to Veeam’s flagship product is in technical preview.
“V 10 takes [Veeam] to a multi-cloud world full speed,” said co-founder Ratmir Timashev. “V 10 is where everything comes together.”
Veeam Continuous Data Protection (CDP) replicates data to private or managed public clouds. The default recovery point objective (RPO) setting in Veeam Availability Suite v10 is 15 seconds. CDP is commonly found in data protection products, especially those emphasizing data recovery.
Other enhancements and features in Veeam Availability Suite include:
- Veeam Availability for AWS. Delivered through a partnership between Veeam and cloud backup and disaster recovery provider N2W, the feature offers cloud-native, agentless backup and availability to protect and recover AWS applications and data. Availability for AWS is geared toward helping enterprises migrate to and manage a multi-cloud or hybrid cloud environment.
- Veeam Agent for Microsoft Windows. Veeam announced this feature in August, but it became generally available Wednesday. It is designed to provide availability for Windows-based physical servers, workstations and endpoints, as well as Windows workloads running in public clouds.
Veeam previously released its Agent for Linux, which provides availability for public cloud and physical workloads hosted by Linux-based servers and workstations running on premises or in the public cloud.
“You need an availability strategy that’s going to extend beyond your virtualized workloads,” John Metzger, vice president of product marketing, said during the general session at VeeamON Wednesday. “Protecting workloads is important, but ensuring availability of those workloads is critical.”
Veeam Availability Suite includes Veeam Backup & Replication and Veeam ONE.
General availability of Veeam Availability Suite v10 is projected for late 2017. Pricing is not available at this time.
Veeam earlier this week announced changes to its executive team.
Check SearchDataBackup this week for more news out of VeeamON.
NEW ORLEANS — Veeam Software has changed CEOs for the second time in less than a year.
On the eve of the VeeamON user conference this week, the data protection software vendor elevated two executives into co-CEO positions as it strives to become a billion-dollar company.
Peter McKay, previously COO and president, and Andrei Baronov, co-founder and CTO, will serve as co-CEOs. McKay will retain his title as president and Baronov will continue as CTO. Former Veeam CEO William Largent moves into a new role as chairman of the company’s Finance & Compensation Committees.
The moves come 11 months after Largent replaced Veeam’s other founder, Ratmir Timashev, as CEO, and former VMware executive McKay joined the company as COO/president. Timashev remains with Veeam as a director of the private company.
McKay will lead Veeam’s “go-to-market,” finance and human resources functions, and work with Baronov to drive future growth, according to the company. The go-to-market strategy will specifically focus on the company’s continued expansion into the enterprise and cloud segments, as well as accelerating growth into the Americas and Asia/Pacific markets.
Baronov will oversee Veeam’s research and development, market strategy and product management functions. Largent will be responsible for the oversight of all corporate governance matters, tax structure, investment management and internal audits.
Founded in 2006, Veeam has a goal of becoming a $1 billion revenue company by 2018 and a $1.5 billion company by 2020, McKay said today at VeeamON.
Veeam recently reported its 2016 revenue bookings at $607 million.
“As we continue to grow and scale our business, we need to do it the right way,” McKay said.
Veeam reported about 2,500 global employees at the end of 2016 and is looking to add 800 over the next year, McKay said. The company plans to invest $126 million in marketing in 2017, which is about 20% of its revenue.
Veeam is looking to expand in four specific areas: geographic, platform (physical, virtual and cloud), segment (increased investment in SMB, commercial and enterprise markets) and partners.
Before joining Veeam, McKay was senior vice president and general manager of the Americas at VMware. He was also CEO of startups Desktone, Watchfire and eCredit.
“Peter took the company to the next level,” Timashev said of McKay’s first year at Veeam.
Veeam is growing faster and is more innovative now, Timashev said.
Veeam claims a total of 242,000 customers, and says it is adding 4,000 customers each month.
“There is an unbelievable opportunity in front of us,” McKay said. “We have to be bold.”
Nutanix is making its hyper-converged infrastructure (HCI) software available on another server platform, this time with the server vendor’s full cooperation.
Nutanix and IBM today disclosed an OEM deal for IBM to sell Nutanix HCI software on Power Systems servers. The deal gives IBM an HCI system and brings Nutanix beyond the x86 platform where hyper-convergence is dominant.
Greg Smith, Nutanix senior director of technical marketing, said IBM will sell Nutanix HCI software on IBM-branded turnkey appliances beginning sometime in 2017. That differs from Nutanix’s recent initiatives to make its HCI software available through channel partners on Cisco and Hewlett Packard Enterprise servers. Neither Cisco nor HPE was a willing partner, as both sell competing products.
“This allows our software to run on a different class of server,” Smith said. “We have done well on x86 platforms, and this allows us to venture into a different market segment. Power systems are used for more advanced big data, machine learning and AI cognitive workloads. These are demanding applications that demand high performance.”
Unlike x86 systems running Nutanix software, the IBM HCI systems will only support the Nutanix Acropolis Hypervisor (AHV). Nutanix started out supporting only VMware hypervisors, and most of its customers still use VMware virtualization. But Smith said AHV will be incorporated on all of the IBM Nutanix systems. Smith said AHV is designed for cloud-native applications and running microservices and containers.
“The objective is for [IBM] customers to run the Nutanix AHV hypervisor,” he said.
IBM sold off its x86 server platform to Lenovo and sees no need to re-enter the x86 market for HCI, according to IBM storage general manager Ed Walsh. In an interview with TechTarget in February, Walsh said IBM’s converged infrastructure platform provides the same benefits to customers as x86-based HCI.
New Dell EMC cloud storage has appeared on the horizon, providing a silver lining for IT-pressed healthcare shops.
Virtustream Healthcare Cloud, which launched this week at Dell EMC World 2017, is a secure compliance archive for electronic medical records. The Virtustream cloud is built on Pivotal Cloud Foundry software running atop Dell EMC storage.
The vendor said the Virtustream Healthcare Cloud hosts mission-critical data sets in a HIPAA- and HITECH-compliant environment, with managed services and guaranteed five-nines (99.999%) availability. Customers use software to tier data from on-premises Dell EMC storage to the Virtustream cloud.
“We’re looking to do the same things with the healthcare cloud that we’ve done in the SAP database world: consumption billing, performance with availability guarantees, built-in disaster recovery with RPOs and RTOs, and a full managed services capability on top,” said Matt Theurer, a Virtustream founder and its senior vice president of product management.
Startup Virtustream launched in 2009 to provide public cloud hosting of legacy applications that were not written for the cloud, such as SAP HANA. It became part of EMC via a 2015 acquisition. EMC subsequently ported its Rubicon project into the Virtustream cloud to create the Virtustream Storage Cloud object platform.
Rubicon turns the Virtustream cloud into a target for underlying Dell EMC storage through Isilon CloudPools, Data Domain CloudBoost and CloudArray for Dell EMC Unity and VMAX all-flash arrays.
Dell EMC also rolled out the Virtustream Enterprise Cloud Connector for the VMware vRealize Automation suite. Theurer said customers can use Virtustream as an endpoint for cloud bursting or tiering to support evolving availability, disaster recovery, performance and security requirements.
BOSTON — Storage is rarely a focal point at OpenStack Summit keynotes, so it was interesting this week to see a Cinder block storage demo — even if it failed — and Edward Snowden discussing data in the cloud.
The OpenStack Cinder demo hit a technical glitch, but the live video feed from Moscow with the former National Security Agency contractor went off without a hitch.
Snowden left the U.S. after his 2013 leak of more than a million documents revealed extensive domestic surveillance operations. He told OpenStack Summit attendees they could help the people who make the decisions on how to build the infrastructure-as-a-service layer — which he said is “increasingly becoming the bones of the Internet.
“You could use [Amazon’s] EC2. You could use Google’s Compute Engine or whatever. These are fine, right. They work. But the problem here is that they’re fundamentally disempowering,” Snowden said. “You give them money, and in exchange you’re supposed to be provided with a service. And that exists. But you’re actually providing them [with] more than money. You’re also providing them with data, and you’re giving up control. You’re giving up influence. You can’t reshape their infrastructure.
“They’re not going to change things and tailor it for your needs,” he continued. “And you end up reaching a certain point where, OK, these are portable to a certain extent. You can containerize things and then shift them around. But you’re sinking costs into an infrastructure that is not yours fundamentally.”
He cautioned that, when running on the stacks of Google or Amazon, “How do you know when it starts spying on you?” Snowden asked. “How do you know when your image has been passed to some adversarial group, whether it’s just taken by an employee and sold to a competitor, whether it’s taking a copy for the FBI, whether legally or illegally. You really don’t have any awareness of this, because it’s happening at a layer that’s hidden from you.”
Snowden said OpenStack could make users “lose that fundamental, inherent silent vulnerability of investing into things” they don’t influence, own, control or shape. He said OpenStack requires “a little bit more of a technical understanding” to build layer by layer and “continues to comply with this very free and open set of values that the open source community, in general, drives all over the place.
“We can start to envision a world where cloud infrastructures are not private in the sense of private corporations, but private in the sense of a person,” Snowden said, where a small business, a large business or a community of technologists could own, control and shape OpenStack and “lay the foundation upon which everybody builds.
“And I think that’s probably one of the most powerful ideas that shapes the history of the internet and, hopefully, will allow us to direct the future of the internet in a more free rather than a more closed way,” Snowden said.
Cinder demo problem
The Cinder block storage service factored into an OpenStack Summit demo gaffe in the context of explaining open “composable” and cloud-native infrastructure. The snafu came during an attempt to show how to run Cinder as a stand-alone service using Docker Compose to spin up containers.
John Griffith, a principal software engineer at NetApp, later explained the problem he confronted on stage: “There’s an interesting race condition that in all of our rehearsals we never hit, where the scheduler container would come up before the database container was actually ready to receive requests,” he said. “And so it would crash the scheduler container.”
Griffith said he had never encountered the problem before, despite running “this exact demo probably at least a hundred times” before the keynote.
“Unfortunately, when you’re doing a keynote live demo in front of a few thousand people, you don’t have the liberty or luxury to just [say], ‘Hey, let me try this again,’ ” Griffith said.
Kendall Nelson, an upstream developer advocate with the OpenStack Foundation, said the demo ran perfectly twice on the morning of the OpenStack Summit keynote and at least a half dozen times the day before.
Nelson said the takeaway would have been that users could deploy Kubernetes and Docker with OpenStack, and use OpenStack services such as Cinder stand-alone, without additional services such as Nova compute.
“Really, one of the most important things to take away from that, too, is the fact that Cinder actually, by itself, can be extremely easy for somebody to deploy and use,” Griffith said. “Somebody could actually download that Compose file and run that Compose file on their own and have an up-and-running Cinder deployment.”
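Griffith’s point about the Compose file also hints at how the race condition from the keynote is typically avoided: Compose can gate one container’s startup on another container’s health check, so the scheduler never starts before the database is ready to accept requests. A minimal hypothetical sketch — the image and service names here are assumptions, not the contents of the actual demo file:

```yaml
version: "2.1"                       # 2.x syntax supports depends_on conditions
services:
  db:
    image: mariadb:10.3              # hypothetical database image
    environment:
      MYSQL_ROOT_PASSWORD: example
    healthcheck:                     # marked healthy only once the DB accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  cinder-scheduler:
    image: cinder-scheduler:latest   # hypothetical; stands in for the demo's scheduler image
    depends_on:
      db:
        condition: service_healthy   # blocks startup until the health check passes
```

Without the `condition: service_healthy` gate, `depends_on` only orders container creation, not readiness — which is exactly the window the on-stage demo fell into.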
Griffith said developers are increasingly realizing a need for persistent block storage with containers.
“There are, of course, people that say the world should be ephemeral, and there’s no persistence. The reality is that’s not the world we live in,” Griffith said. “Databases are pretty useless if they don’t have any data in them. OpenStack has been working on storage for a long time. The container space hasn’t. So this is actually an opportunity. ‘Hey, here is a storage service. You can plug this in, and now all you have to do is focus on your APIs.'”
Beleaguered vendor FalconStor Software reported a cash-flow-positive first quarter of 2017, although that highlight came more from spending cuts than improved sales.
FalconStor revenue dipped to $6 million in the quarter, after the company posted $7.4 million in both the previous quarter and the first quarter of 2016. The vendor took solace in a slight uptick in FalconStor FreeStor software revenue, to $1.6 million from $900,000 a year ago.
The company has undertaken cost-cutting initiatives that resulted in non-GAAP expenses decreasing to $7 million compared to $8.1 million in the previous quarter and $10.7 million in the first quarter of 2016.
“We are pleased that we achieved our goal of being cash flow positive for the quarter,” said Daniel Murale, FalconStor’s vice president of finance and interim chief financial officer. “As of March 31, our cash and cash equivalents balance increased 1% to $3.4 million as compared with Dec. 31, 2016. Our No. 1 goal as a company is to continue to preserve our cash balance.”
Murale said FalconStor has gone from 224 employees a year ago to 165 at the end of last quarter. “We continually look to optimize our cost structure,” he said.
That doesn’t mean FalconStor was profitable. The vendor lost $1.1 million in the quarter. But that’s an improvement over a $4.3 million loss a year ago, and FalconStor CEO Gary Quinn said customers are heating up to FalconStor FreeStor.
The vendor claims 360 FalconStor FreeStor customers as it tries to revitalize a business that has endured serious setbacks, including the 2011 suicide of its founder and CEO, ReiJane Huai.
The Melville, N.Y.-based company reported $5.5 million in bookings, compared to $8.4 million in the previous quarter and the $7.4 million it generated in the first quarter of 2016. FalconStor reported that OEM partner Hitachi Data Systems accounted for 10% of its total revenue.
FalconStor FreeStor is a building block for growth
FalconStor, an early storage virtualization vendor, is trying to rebuild itself with its FreeStor storage virtualization. FalconStor FreeStor provides block-based services such as data migration, protection, recovery and analytics for heterogeneous storage.
“Our overall performance still requires some improvement,” said Gary Quinn, FalconStor’s president and CEO. “We have been able to grow FreeStor, but just not at a rate faster than the legacy product is weakening. Many of our legacy customers were larger virtual tape library (VTL) customers who have chosen to update their backup solutions with modern snapshot technology. In many cases, those VTL customers were part of a field-based OEM partner’s sales team, which meant FalconStor did not have direct contact [with] those customers.”
Quinn offered some insight into the storage market when asked by an analyst if the current market was “still hazy.”
“You could say you’re kind of underwater in a pond with a lot of algae at the moment with a snorkel,” Quinn said. “It’s fairly tough. I mean, there are a couple of people … I think Commvault had a pretty good result last quarter. I think they’ve finally got some traction going, but I think overall, for most people in the storage industry, it’s still a fairly significant slog out there.”
Commvault is getting into the hyper-converged backup game.
Commvault CEO Bob Hammer outlined the vendor’s product plans for the coming months Wednesday, May 3, during its quarterly earnings call. He said hyper-converged reference architectures for secondary storage are in the company’s plans, along with an enhanced platform for the cloud, new service offerings for endpoints and Commvault managed services. The company also plans to enhance the Commvault Data Platform with business analytics.
Hammer didn’t give enough specifics to tell whether the hyper-converged backup products will resemble converged secondary storage platforms from vendors such as Rubrik and Cohesity. He will leave the details to the actual product launch. But he did lay out Commvault’s strategy.
“We are launching this quarter our move into secondary storage with a whole series of leading-edge hyper-converged solutions,” Hammer said. “And right on the back of that are a series of new standalone solutions, and right on the back of that is analytics.”
Hammer said Commvault’s hyper-converged backup reference architectures will handle snapshots, replication, archiving and copy data management. In addition, he said, scale-out hyper-converged backup configurations will be available this fall. The company will also expand with managed services for backup, archive and endpoint offerings.
“There is a massive trend in the industry to move away from legacy infrastructure to more commodity cloud-like infrastructures,” he said. “The key commoditized hardware components will be managed by software. … Our approach to hyper-converged storage is unique, since it combines the Commvault Platform’s comprehensive index knowledge of the data with the management of the back-end storage.”
Commvault is building business analytics into its data management platform by adding search capabilities that can be fed into analytics engines.
“It also includes embedding machine learning and other artificial intelligence capabilities into our platform,” Hammer said.
Continued growth depends on more large deals
Commvault reported total revenue of $172.9 million last quarter, an increase of 8% year-over-year and a sequential increase of 4%. Commvault posted $650.5 million in total revenues for the full fiscal year, an increase of 9% compared to fiscal 2016.
Software revenue of $84.7 million in the quarter was an increase of 15% year-over-year and a 10% sequential increase. Services revenue of $88.2 million in the quarter was up 2% year-over-year and flat sequentially.
Total software revenue for the full fiscal year was $296 million, an increase of 15% compared to fiscal 2016. Services revenue for the full fiscal year was $354 million, which was an increase of 5% compared to 2016.
Commvault reported a net income of $3.2 million for the fourth quarter of fiscal 2017, and a net income of $500,000 for the full fiscal year. Hammer said the objective for this year is to further improve licensing revenue through the enhanced product portfolio “focused on market-leading solutions for customers dealing with big three trends in the market.”
Those three market trends include the cloud, IT infrastructure modernization and business analytics. Commvault has been working on digging itself out of a sales slump that began in 2014. In a previous earnings call, Hammer said the company still faces some critical challenges and continued growth depends on its ability to win more large deals. A lot of its success will turn on releases of new Commvault products.