The flurry of activity surrounding on-premises object storage specialist Cloudian suggests how rapidly the market has been moving over the past year.
Cloudian CEO Michael Tso said 2017 was the startup’s best year, as revenue tripled and the customer count soared past 200. He said 89% of Cloudian object storage sales came through value-added resellers by the fourth quarter, and the trend continued into 2018 – a significant jump over the 25% in third-party sales in 2015 and 36% in 2016.
“That really is a signal to me that the product is ready for a broader channel,” Tso said.
Cloudian launched in late 2011 with a focus on prominent Japanese telecommunications and service providers NTT East, NTT Communications and Nifty, which needed multi-tenant, geographically distributed storage after the Fukushima earthquake.
Tso said he expects Cloudian to become profitable over the next few years, with a possible IPO down the road. But he said growth is more important than profitability right now.
“Our board and our investors are telling us to grow just as fast as we can and don’t worry about profitability,” he said. “I think if we stopped growing as aggressively as we are, we either would be able to be profitable this year, or if I look at the numbers carefully, maybe we could have been profitable even earlier. We are looking at a potential IPO probably three to four years down the road. We are not really in a hurry. We expect to be profitable before that.”
Cloudian is concentrating on partnerships with channel and OEM partners to grab a significant share of the storage market. The object vendor followed its 2016 OEM deal with Lenovo with an EMEA-based joint reseller agreement with Hewlett Packard Enterprise in late 2017. In 2018, Cloudian partnered with Machine Box on a machine-learning option and made available a Cloudian object storage “HyperStore Test Drive” for Google Cloud Platform.
Also, late 2017 conversations with Cisco Systems led to a significant investment earlier this year from Digital Alpha, a private equity firm started by former Cisco executives. Digital Alpha made a $25 million equity commitment to Cloudian and set up a utility financing facility of up to $100 million.
“The goal for the $100 million is to set up a separate company that would purchase appliances and solutions from Cloudian and be able to provide those to the end user through a pay-per-drink consumption model,” Tso said. “They will add more gear when you need it, and they’ll remove gear when you’re trying to take it away. It’s just like the way cloud works except it’s cloud being put into your own data center, because our product is only sold into on-prem environments.”
Cloudian expanded in March with the acquisition of Infinity Storage, an Italian file-based software-defined storage vendor. Cloudian already used Infinity’s technology in its HyperFile appliance that combines file and object storage.
“They make an NFS/CIFS front end that can move data into object storage or into the cloud,” Tso said of Infinity Storage. “We partner with every one of the gateway companies out there, but we weren’t really happy with any of their solutions. The problem with a lot of products out there is that they’re not in the kernel space. File systems have traditionally always been done inside the kernel. It’s really the only way to do it that’s really robust, but it’s very hard. We spent a year testing pretty much every vendor in the market, and we eventually came on this small company based out of Milan. They’ve been doing it for over 10 years.”
Crossbar recently chalked up another milestone in its quest to get on-chip non-volatile resistive RAM (ReRAM) technology to market.
The Santa Clara, California-based startup licensed its core ReRAM intellectual property to Microsemi, a semiconductor supplier to the military and aerospace industry. The companies plan to collaborate on research, development and application of Crossbar ReRAM in Microsemi products designed for 16, 14 and 12 nanometer (nm) process nodes.
ReRAM is a type of non-volatile memory that consumes less power and offers faster reads and writes, higher endurance and greater storage density than NAND flash. But Crossbar does not position its ReRAM as an alternative to technologies such as flash-based solid-state drives (SSDs). The initial use cases for ReRAM will more likely be under the covers in CPUs, field programmable gate arrays (FPGAs), and system-on-a-chip (SoC) architectures, possibly as an alternative to slower static RAM (SRAM) or less energy-efficient dynamic RAM (DRAM), according to Sylvain Dubois, vice president of business development and marketing at Crossbar.
Dubois expects the embedded Crossbar ReRAM technology to reach products that businesses or consumers might use in 2019.
ReRAM use case example
In the meantime, Crossbar demonstrated potential use cases in “artificial intelligence applications at the edge” – designed for devices such as surveillance cameras and mobile phones – at the recent Embedded Vision Summit in San Jose, California. The startup showed off ReRAM test chips with applications designed to recognize faces and license plates. Dubois said the test chips integrate the algorithms and the database, enabling classifications to be done efficiently at low latency and low power, with no need to communicate with a distant cloud-based database.
Alternative technologies that a designer might have chosen for such applications include SRAM and DRAM, according to Dubois. But he claimed SRAM would have been slower and DRAM would have consumed more energy than the Crossbar ReRAM.
Microsemi did not disclose the types of products that might incorporate the Crossbar ReRAM technology nor the foundry with which it is working. The company’s product line includes FPGAs, controllers, communications chips and artificial intelligence computing chips.
“What is unique is that this Microsemi deal is targeted at 1x nanometer,” Dubois said. “Getting to the point where you can prove the scalability is a major milestone. Now we have not only a customer but we also have a foundry that is getting access to embedded ReRAM at 1x nanometer.”
Dubois said Crossbar started its non-volatile ReRAM commercialization phase on 40 nm process nodes with Semiconductor Manufacturing International Corp. (SMIC). He said Crossbar also works with two other foundries and licenses its ReRAM to about a dozen chip designers.
Crossbar ReRAM nanofilament technology is built on standard complementary metal oxide semiconductor (CMOS) processes, and the company claims it will scale to below 10 nm without impact to performance.
“The big trick is to get into high volume production because the most important thing in semiconductors is to get the volume high enough so you can drive the costs out,” said Jim Handy, general director and semiconductor analyst at Objective Analysis.
Handy said the Microsemi agreement with Crossbar shows the semiconductor supplier took a hard look at the ReRAM, decided it’s a good technology, and expects other companies to sign on. He said the 1x nanometer process achievement indicates Crossbar could be looking at use cases where NOR flash won’t work any longer due to scaling issues.
“Microsemi supplies stuff for aerospace, and in space, there’s a lot of radiation. Radiation tends to cause NOR flash, or any flash, to lose its content. The radioactive particles go through the chip and drag the electrons out of the floating gate,” Handy said. “The Crossbar metal filament technology doesn’t use electrons. And nuclear particles can go zipping through the chip and not destroy bits.”
Handy said enterprise users aren’t likely to notice ReRAM in their servers or storage systems because it would be buried in places they won’t see. But he said Crossbar ReRAM could lower the cost of SSD controllers marginally by allowing them to scale better.
“Now 80% or more of the cost of the drive is the NAND flash. So this is going to make a small difference, but it will make a difference,” Handy said.
Komprise struck its first major reseller agreement, partnering with IBM to offer Komprise Intelligent Data Management software with the vendor’s storage portfolio.
Customers can use Komprise software to move file-based data from network-attached storage (NAS) systems to IBM cloud-based object storage, according to Krishna Subramanian, co-founder and chief operating officer at Komprise.
Users can download the Komprise Observer virtual appliance and point it at their NAS systems. The Komprise software analyzes how much hot and cold data they have and the file growth rate to help them set policies to move data to on-premises or off-site IBM Cloud Object Storage for archival or disaster recovery (DR) purposes. The Komprise Director management console, which can run at the customer’s data center or in the cloud, displays the potential savings considering NAS, backup, DR and new storage target costs.
The Komprise software also handles the file-to-object mapping and non-disruptively migrates the data from the NAS system to the IBM Cloud Object Storage. End users can still access the data with the same permissions and metadata from their source NAS systems, even though the files have been transferred to cloud-based object storage on the back end.
“When we move a file out of NAS into object storage, we put in a link in that NAS so when a user goes to open the file, it looks like the same file. It has all the metadata properties and everything,” Subramanian said. “A Komprise Observer will actually respond, and it will map the object back to file and return it. But that whole handshake is transparent to users and applications.”
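The stub-and-recall flow Subramanian describes can be sketched in a few lines of Python. This is an illustrative model only, not Komprise’s implementation: the object store here is a plain dict, and `move_to_object` and `read_file` are hypothetical names.

```python
import json
import os
import tempfile

OBJECT_STORE = {}  # stand-in for a cloud object store: key -> bytes

def move_to_object(path, key):
    """Copy the file's bytes to object storage, then replace the
    file with a small stub that records the object key."""
    with open(path, "rb") as f:
        OBJECT_STORE[key] = f.read()
    with open(path, "w") as f:
        json.dump({"stub": True, "object_key": key}, f)

def read_file(path):
    """Open a path transparently: if it holds a stub, fetch the
    bytes back from object storage; otherwise read it directly."""
    with open(path, "rb") as f:
        data = f.read()
    try:
        stub = json.loads(data)
        if isinstance(stub, dict) and stub.get("stub"):
            return OBJECT_STORE[stub["object_key"]]
    except (ValueError, UnicodeDecodeError):
        pass  # not a stub: return the raw file contents
    return data

# Demo: tier a file out, then read it back through the stub.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "report.txt")
    with open(p, "wb") as f:
        f.write(b"quarterly numbers")
    move_to_object(p, "archive/report.txt")
    assert os.path.getsize(p) < 100              # only the small stub remains
    assert read_file(p) == b"quarterly numbers"  # recall is transparent
```

The point of the sketch is the transparency property the quote describes: a caller reads the same path before and after tiering and gets the same bytes back.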
Komprise vs. cloud gateways
Subramanian said, unlike many cloud gateway appliances, Komprise does not aim to shift all data to the public cloud and cache the hottest data in local appliances. She said that approach can become expensive if the customer needs to retrieve data from the cloud.
“We’re simply providing a more cost efficient way to manage the data. Essentially we’re saying, ‘Look, your NAS is great for your hot data. But for the 80% of your data that is rarely getting cached, and you need to keep either for business or compliance reasons, let us move that to a cost-efficient store,’” Subramanian said.
The Komprise software also enables customers to migrate data from one NAS system to another NAS system, if they want to replace or decommission file-based storage devices.
Subramanian said Komprise and IBM had joint customers in industries such as financial services, insurance and health care storing large volumes of data. Komprise is part of the Ready for IBM Storage and Ready for IBM Cloud validated solution directory. The new worldwide reseller agreement applies to IBM product sales and services teams as well as IBM channel partners, Subramanian said.
Komprise prices its software based on the amount of data under management. The startup offers a subscription model, at about a half penny per GB per month, or a perpetual license, at $120 to $130 per TB, according to Subramanian.
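Taking those list prices at face value, the break-even point between the subscription and the perpetual license is easy to estimate. A rough sketch, using 1,000 GB per TB and ignoring support and maintenance fees, which the article does not give:

```python
# Komprise list prices quoted above
subscription_per_gb_month = 0.005   # about half a penny per GB per month
perpetual_per_tb = 125.0            # midpoint of the $120 to $130 range

# Subscription cost per TB per month
subscription_per_tb_month = subscription_per_gb_month * 1000  # $5.00

# Months of subscription that add up to one perpetual license
break_even_months = perpetual_per_tb / subscription_per_tb_month
print(break_even_months)  # 25.0, i.e. roughly two years
```

By this crude model, the perpetual license pays for itself if the data stays under management for more than about two years.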
Earlier this month, Komprise shipped a new 2.8 software update. New features included support for euros, British pounds and Japanese yen in the product’s ROI calculator, which originally displayed cost savings only in U.S. dollars. The 2.8 product upgrade also enhanced the NAS migration capabilities for SMB environments, adding the ability to preserve access control.
Hyper-converged pioneer Nutanix has a history of rapid revenue growth and wide losses. For last quarter, both of those were larger than expected.
Nutanix revenue grew 41% year-over-year to $289 million — $9 million above the top end of its previous forecast. Nutanix also lost $35 million, more than expected and well above its $20 million loss a year ago.
For this quarter, Nutanix forecast revenue of $295 million to $300 million, but heavier losses than analysts expected.
In an interview following the earnings call last Thursday, Nutanix CEO Dheeraj Pandey said the company’s move to a software-centric business model was partly to blame for the earnings miss. Nutanix is two quarters into the model, which no longer credits revenue from partners’ hardware sold with Nutanix software. Nutanix had $12 million more in deferred revenue than expected last quarter due to the accounting change. Pandey said changes in the billings-to-revenue ratio make it harder to accurately forecast revenue during the transition.
“That’s part of the software transition,” he said. “We don’t fully have an exact view of billings to revenue as we go through this transition.”
Nutanix also went on a hiring spree, adding more employees than in any previous quarter. CFO Duston Williams called it a “full court press” on hiring. That included the addition of 60 sales teams and 85 new employees through the acquisitions of Netsil and Minjar. That helped push Nutanix’s expenses to $232 million in the quarter, $12 million above its guidance.
Nutanix remains in a growth stage, hoping to take advantage of the still rapidly expanding hyper-converged infrastructure (HCI) market. According to IDC, the hyper-converged market expanded 69.4% year-over-year for the fourth quarter of 2017. That should leave plenty of room for future Nutanix revenue increases.
“We are still discovering the total addressable market (TAM) of this architecture,” Pandey said. “Five years ago, nobody gave hyper-convergence a $100 million TAM but now it’s already close to $10 billion. A lot of the TAM expansion depends on category creators like Nutanix. We go and expand workloads, geographies, different kinds of mission critical applications, hardware plans on which we run. We keep expanding that, and the TAM should continue to grow.”
Much of that expansion is due to traditional storage systems becoming hyper-converged. Pandey said about 65% to 70% of Nutanix deals come from converting traditional storage products to HCI.
“Other HCI competitors don’t have that much HCI focus,” he said, referring to server vendors that have moved into HCI. “They have other three-tier products that they still sell, while we have conviction in our architecture.”
Despite its losses, Nutanix has more than $900 million in cash. Pandey said the vendor will continue to increase spending to add features and new products from internal development and acquisitions. The acquisition strategy will continue to focus on small deals that add key pieces of technology rather than large established companies.
“We’re not in the business of buying growth,” Pandey said. “We won’t just go and buy a customer base. We’re looking for companies that have great technology and awesome people that want to build this company into something bigger.”
Nutanix AHV hypervisor adoption is growing, although it doesn’t directly add to revenue because it is part of the Acropolis software stack. Nutanix claims 33% of its HCI nodes sold last quarter used AHV compared to 30% in the previous quarter. Pandey said AHV saves people from having to buy VMware licenses, but new features also sway customers.
“Well, I think we don’t lead with cheap,” he said on the earnings call. “Nutanix is not that cheap of a product. If anything, we get a lot of flak for being a premium product, but we lead with ease of use and that our stuff works.”
“We’re making microsegmentation simple to use, so you don’t have to pay for NSX,” he said, referring to VMware’s software-defined networking product. “Our microsegmentation is one-click, and one node at a time.”
BOSTON – At the first ZertoCON in 2016, analyst John Morency said that “IT resilience” is becoming the new “disaster recovery.” The concept at the time stressed continuous availability and proactively avoiding all recovery situations, versus just being able to recover from huge disasters.
Two years later, at ZertoCON 2018, IT resilience was the dominant theme. But that concept itself is evolving.
“The definition has changed,” Morency, a research vice president at Gartner, said in his keynote address Wednesday. “The scope has changed.”
Gartner defines resilience as “the ability of an organization to protect, absorb, recover and adapt in a complex and rapidly changing environment to enable it to deliver its objectives and to rebound and prosper.” That’s different from the classic recovery model with a focus on recovery time and recovery point objectives, Morency said.
“Backup can only take us so far,” Morency said.
Zerto’s new Elastic Journal, for example, which is scheduled for release with Zerto 7 in early 2019, provides continuous recovery points across data, files or virtual machines, going from seconds to years back. Zerto pitched it as a new way to do backup. The feature is a part of Zerto’s newly branded IT Resilience Platform.
“We don’t ever go to our backup solution for recovery,” said senior system engineer Jayme Williams, of material manufacturer TenCate, a Zerto customer since 2012.
Instead, TenCate uses Zerto for its journal file-level recovery.
Data protection criteria are changing, Morency said. In the past, functionality was the most important for products. Now it’s cost, ease of use and capability to support multiple data protection uses.
Needs and planning tips for IT resiliency
According to Morency, governance requirements driving organizations’ need for IT resilience include:
- close-to-continuous IT and business operations
- workload mobility
- sustainable data integrity, consistency, availability and accessibility
- cyberthreat mitigation
- IT service configuration, deployment and change agility
- detection of and response to potentially disruptive events in order to sustain business and IT operations
Ransomware is changing the game for cyberthreats. Speakers at ZertoCON 2018 noted that a ransomware attack is a “when” not “if” scenario. And traditional backup — with recovery point objectives measured in hours — may not cut it, as organizations will want to recover from just before the attack hit.
Gartner estimates that by 2020, 30% of organizations targeted by major cyberattacks will spend more than two months cleansing backup, resulting in delayed recoveries.
IT resilience management is also driving product convergence: in backup software, runbook automation, software-defined managers and cloud management platforms.
“It’s not about backup. It’s not about runbook automation,” Morency said. “It’s all of the above.”
For many organizations, the IT resilience scope is hybrid. According to Gartner research, nearly 80% of organizations say their data center capacity profile in five years will include some combination of on-premises and cloud.
Morency provided an action plan for organizations looking to begin an IT resilience journey:
- Monday morning: Benchmark your organization’s resilience and identify people, process and technology gaps that are specific to the support of mission-critical business processes and applications.
- 30 to 90 days: Develop and execute a plan for improving relevant resilience gaps for mission-critical processes and applications.
- 90 to 180 days: Prioritize gap closure for critical and important processes and applications; mitigate the resilience risks posed by key vendors and service providers.
NetApp CEO George Kurian says things have never been better for the storage vendor since he joined the company in 2011.
Kurian maintains that NetApp’s all-flash platform is a hit, that it is the first major vendor with an end-to-end NVMe array, that it has solid connections with all the major cloud vendors and that it now has a viable hyper-converged product. And he said NetApp is still making two SAN array displacements a day, while its main rival Dell EMC is struggling with its midrange storage and cloud strategies.
“What a difference a year makes,” Kurian said during NetApp’s Wednesday evening earnings call, reporting a better than expected 11% year-over-year revenue increase to $1.64 billion last quarter. “We improved the consistency of our results, expanded our market opportunities, and successfully accelerated our momentum. We are undoubtedly in the best position since beginning the transformation of NetApp.”
NetApp’s forecast failed to capture Kurian’s optimistic words, though. Investors were disappointed by NetApp’s guidance range of $1.365 billion to $1.465 billion for this quarter, a significant drop from last quarter. The stock price fell 3.9% to $66.79 Wednesday after the earnings report and then dropped more to $64.00 at today’s opening.
When asked about the tepid guidance, Kurian said: “We are very bullish on the strength of our product portfolio. Our philosophy is to build a plan that we can meet or beat and provide you more updates as we see more visibility through the course of the year.”
When Kurian became NetApp CEO three years ago, the vendor was a laggard in the emerging all-flash market. He said all-flash revenue grew 43% year-over-year last quarter and is at a $2.4 billion run rate for the year, putting it at or near the top of the overall storage market for that segment. He said most new arrays that ship now are all-flash configurations.
NetApp this month launched its All-Flash FAS (AFF) 800, part of a wave of NVMe arrays from major vendors. Dell EMC, Hewlett Packard Enterprise, Pure Storage and IBM also have new NVMe arrays out or coming, as do several startups specifically targeting that market.
Speaking of competitors, Kurian said Dell EMC still has a lot of work to do following its 2016 merger. He said the market leader needs to do more than rationalize its overlapping midrange arrays to stem share losses.
“I think what Dell has to do is not only rationalize their portfolio, but then to develop a coherent cloud strategy. That takes years of work. They’re years behind on everything from flash to cloud,” he said.
NetApp’s cloud strategy revolves around its Data Fabric and Cloud Volumes, which make its file services available in public clouds. NetApp Cloud Volumes is generally available for Amazon Web Services and in private previews for Microsoft Azure and Google Cloud Platform.
“The opportunity created by this part of our business is incredibly exciting,” Kurian said.
Kurian said NetApp’s FlexGroup feature that allows customers to cluster FlexVols has helped the vendor win deals from Dell EMC’s Isilon scale-out NAS product. “We have taken back several footprints from Isilon, and frankly, they’re trying to chase us now,” he said.
NetApp HCI is still in its early days, less than a year after launching. Kurian said the vendor is not chasing traditional hyper-converged use cases but concentrating on enterprises running mixed workloads. NetApp uses its SolidFire all-flash platform as the storage for HCI.
“We are not targeting the entire hyper converged market, but a very specific large segment of it where we think we’ve got a winning architecture,” he said.
Should we believe Kurian’s words or NetApp’s guidance? Either he is not as optimistic as he sounds, or he is setting things up for NetApp to beat expectations again. Check back in three months to find out.
Hewlett Packard Enterprise extended its impressive storage turnaround last quarter.
For the second straight quarter, HPE storage revenue increased 24% year-over-year – jumping to $912 million for the period. Now, that includes revenue from Nimble Storage that HPE didn’t own the year before, so that 24% is inflated. But HPE’s organic growth – without Nimble revenue – increased 14% from last year. That’s better than the 11% organic growth from the previous quarter and well above overall industry growth.
When the $1.2 billion Nimble deal closed in April 2017, HPE storage was at rock bottom. Revenue declined 12% year-over-year in the first quarter of 2017 and dropped 13% year-over-year in the second quarter. All-flash sales growth for HPE’s flagship 3PAR platform trailed its competitors’, and the company pointed to “execution” problems in the wake of the Hewlett-Packard breakup.
A year later, HPE storage is soaring. Its all-flash growth of 20% remains below rivals such as NetApp and Pure Storage, but the InfoSight storage analytics that HPE gained in the Nimble deal helps the vendor stay ahead of the rush to use artificial intelligence in IT management. HPE has extended InfoSight to 3PAR as well.
HPE CEO Antonio Neri said his company has gained market share in storage in 10 of the last 12 quarters.
“We actually executed way better than last year,” Neri said of the HPE storage team. “Last year, we had some execution challenges, particularly North America. We think we have addressed those issues. And when they think about the opportunity, the market, obviously, all-flash continue to be a significant opportunity. This quarter, we grew 20%. And we are really excited about our portfolio. With our AI technologies built into both in Nimble and 3PAR, that is something that’s resonating with customers.”
Neri pointed out HPE storage is unlikely to increase 24% year-over-year next quarter because the third quarter will include Nimble revenue from 2017. But if it can keep going with its double-digit organic increase, HPE will almost definitely continue to take market share.
HPE stood second behind networked storage leader Dell EMC in the latest IDC numbers, from the fourth quarter of 2017. According to IDC, overall networked storage revenue grew 1.4% year-over-year in the fourth quarter of 2017 and 4.1% in the quarter before that. All-flash array revenue grew 38.1% in the third quarter and 15.1% in the fourth.
HPE also reported it more than doubled revenue from its SimpliVity hyper-converged platform, although that product was in its early days under HPE’s banner a year ago. HPE acquired SimpliVity for $650 million in January 2017.
SAN FRANCISCO – Pure Storage will push the theme of a “data-centric architecture” at its annual Accelerate user conference that begins Wednesday.
Data-centric architecture is Pure’s description for its new flash strategy. The strategy revolves around the concept that the storage array is becoming a commodity item, an afterthought for IT. Enterprise data centers instead want fast flash to deliver data as a service to any type of application.
Among the expected product highlights is a major upgrade to the flagship Pure Storage FlashArray block and file system, featuring a handful of highly dense models that extend nonvolatile memory express (NVMe) rack-scale flash across the product line.
This will be the first Pure Accelerate under new CEO Charles Giancarlo, who replaced Scott Dietzen last April. Dietzen remains as Pure’s chairman.
Shared accelerated storage: Jargon or meaningful distinction?
Pure Storage has had an eventful year so far. The vendor, which became a public company in 2015, this year reached a pair of key milestones: $1 billion in annual sales and its first non-GAAP profit. Pure Storage launched in 2009 as one of the first vendors to sell only all-flash arrays.
Pure now wants to shift the focus away from hardware specs to software-defined storage features that exploit advances in flash technology. With the launch of Pure AIRI this year, the vendor moved into artificial intelligence.
Pure wants to lump its all-flash arrays under a recently developed hardware category known as shared accelerated storage. The term was coined by IT analyst firm Gartner to describe hardware equipped with NVMe over Fabrics capabilities.
“Our vision for data-centric architecture is that IT organizations need to think less about managing storage and more about being storage service providers to the rest of the organization,” said Matt Kixmoeller, Pure’s vice president of strategy. “That’s a bit of a different mindset than just buying and running storage arrays.”
At Pure Accelerate in 2017, the vendor previewed a FlashStack converged infrastructure product based on Cisco servers and networking. The first iteration of that product will be made generally available this week.
As all-flash array pioneer Pure Storage celebrates its $1 billion in annual revenues, two of its former executives are planning the next big thing in flash.
Pavilion Data, whose CEO Gurpreet Singh and VP of global sales Dan Heydenfeldt came from Pure, today picked up $12 million in funding to market its NVMe over Fabrics (NVMe-oF) storage system. Singh said Pavilion is going after customers running applications built on a “new modern stack dominated by open source, massively parallel, scale-out, clustered databases and file systems.” In other words, it’s targeting storage for databases such as MongoDB, Spark, MySQL and Cassandra instead of Oracle and Microsoft SQL Server.
“Somebody gets to build the next billion dollar company riding on this modern data stack,” Singh said. “We believe we have the best architecture for these modern applications. We call it disaggregated shared storage, or rack-scale flash.”
Singh said current all-flash arrays — including Pure’s — are fine for traditional applications but not modern apps. “The old-school dual controller architecture, server-centric design exposes a lot of challenges when running these applications,” he said. “For example, performance density is just not there.”
He said a storage system built to run these new apps must have the performance, latency and bandwidth characteristics of direct-attached storage yet be easy to scale and use. “Today there are compromises,” Singh said. “You can go shared storage, but you lose performance. Or you stick in four to six NVMe cards per server, 40 servers per rack, tens of racks, and you lose the serviceability and data management. It requires a complete rethink of how you develop and architect a storage system. You can’t retrofit that. The basic math doesn’t add up.”
Pavilion’s answer is the Pavilion Memory Array, which started shipping in early 2018.
Pavilion Data CTO VR Satish said a 4U Pavilion Memory Array can drive 120 GB per second with around 100 microseconds of latency and 20 million IOPS. The system uses x86 hardware and up to 72 standard 2.5-inch NVMe flash drives for a maximum capacity of 1 PB.
The back of the box resembles a network switch, with a minimum of two line cards, each containing four 100 Gigabit Ethernet ports and two controllers. Customers can expand to 10 line cards, for a total of 40 ports and 20 controllers.
The array does not scale beyond one system, but Satish said 1 PB of storage with 72 drives “is more than enough.”
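A quick back-of-envelope check shows how the quoted figures fit together. These are peak vendor numbers in decimal units, not measured results:

```python
drives = 72              # standard 2.5-inch NVMe drives
capacity_tb = 1000.0     # quoted 1 PB maximum, in decimal TB
throughput_gb_s = 120.0  # quoted peak throughput, GB per second
ports = 40               # 100 Gigabit Ethernet ports, fully expanded

# Per-drive capacity implied by 1 PB across 72 slots
per_drive_tb = capacity_tb / drives
print(round(per_drive_tb, 1))  # 13.9, so high-capacity NVMe SSDs

# Aggregate network line rate versus quoted throughput
wire_gb_s = ports * 100 / 8    # 40 x 100 Gbit/s = 500 GB/s
print(wire_gb_s > throughput_gb_s)  # True: the network is not the bottleneck
```

In other words, hitting 1 PB requires roughly 14 TB drives in every slot, and the fully expanded Ethernet fabric has ample headroom over the quoted 120 GB/s.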
The system supports RDMA over Converged Ethernet (RoCE) and NVMe over TCP, but not Fibre Channel.
“Hyperscalers don’t want to be caught dead running Fibre Channel in their data centers,” Singh said.
Satish said Pavilion developed a clustered file system that supports RAID 6 data protection, non-disruptive upgrades, multi-pathing, thin provisioning, snapshots and clones. The array does not require any software to run on host servers.
Founded in 2014, Pavilion Data has just under 60 employees split between the U.S. and India. But it has not been all smooth sailing so far. Kiran Malwankar, a founder and the original CEO, left Pavilion early this year. Pavilion also had some layoffs around that time, although it is hiring now.
Singh said Sundar Kanthadai, another founder and the current VP of engineering, is in charge of development, and there has been no change in direction since Malwankar left.
He said Malwankar “left to pursue other opportunities” although Malwankar’s LinkedIn entry describes him as a “free bird.”
“It’s the natural course and evolution of a company,” Singh said. “People leave and new people come in.”
The new funding brings Pavilion Data’s total to $33 million. New investors Korea Investment Partners and DAG Ventures participated along with previous investors Kleiner Perkins Caufield & Byers, Artiman White Space Investments, and SK Telecom.
While Veeam Software uses its VeeamON user conference this week to further its push into the enterprise, Quest Software is making its own attempt to go the same route with its NetVault Backup.
With NetVault Backup 12.0, Quest has made the application more scalable – particularly for virtual machines. NetVault Backup 12.0 can run VMware plug-ins on any available proxy, so users can back up VMs through a unified view and scale to thousands of VMs. A new heuristic algorithm can load balance backup jobs across clients acting as backup proxies.
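Quest has not published its heuristic, but the general idea of spreading backup jobs across available proxies can be sketched as a greedy least-loaded assignment. This is an illustration of the concept only; the function names and the job-size cost model are assumptions, not NetVault internals.

```python
import heapq

def assign_jobs(jobs, proxies):
    """Assign each (name, size_gb) job to whichever proxy currently
    has the least total assigned work, placing the biggest jobs first."""
    heap = [(0, p) for p in proxies]  # (assigned_load, proxy_name)
    heapq.heapify(heap)
    placement = {}
    for name, size in sorted(jobs, key=lambda j: -j[1]):
        load, proxy = heapq.heappop(heap)   # least-loaded proxy
        placement[name] = proxy
        heapq.heappush(heap, (load + size, proxy))
    return placement

# Four VM backup jobs of varying size spread across two proxies
jobs = [("vm-a", 500), ("vm-b", 300), ("vm-c", 200), ("vm-d", 100)]
placement = assign_jobs(jobs, ["proxy1", "proxy2"])
print(placement)
```

With these sizes the greedy pass lands 600 GB on one proxy and 500 GB on the other, which is as even as this input allows.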
NetVault Backup 12.0 also supports application-aware storage array snapshots for the first time, although the only array supported now is Dell EMC SC Series (Compellent). Other enhancements include a single sign-on so users can log onto NetVault Backup by using Active Directory credentials, a push install feature to streamline product updates and installations, a new granular catalog search function and a new widget-based dashboard.
“We’ve decided to up the game with NetVault a little bit,” said Adrian Moir, Quest’s lead technology evangelist. “We wanted to go further into the enterprise space. We wanted to add more scale around protection of VMware, and give a single view of a larger environment by placing proxies under a single point of management. This is also our first entry into array-based snapshot. We’ve built a framework and will expand it, adding other arrays as quickly as we can.”
Also like Veeam, Quest is avoiding integrated systems in this age of converged data protection. Moir said Quest wants NetVault Backup to work with as many backup target options as possible rather than package it on an appliance.
“We’re quite happy to run on anyone’s hardware,” he said. “If people want to use a specific hardware platform, that’s fine with us. We’d rather offer the flexibility rather than give them something that doesn’t match the rest of their infrastructure. An appliance might be right-fitted, but sometimes it doesn’t match everything else. We’d rather be flexible and let them match the rest of their environment.”