The enterprise storage market is splitting into two segments that are still indistinct but will become much less so. Understanding this evolution is essential to making sense of storage vendor product strategy.
Before explaining this tectonic motion, it is necessary to define what I mean by enterprise storage.
When I talk about enterprise storage, I mean storage systems used by enterprises that traditionally have had their own data centers and have certain expectations about storage capabilities and characteristics. This segment includes two dynamics. One is current operations and critical business applications that must continue without interruption while meeting increasing demands. The other is the transition to IT-as-a-Service methodologies through private cloud implementations.
The traditional IT environment has relied on storage systems to provide resiliency and data protection. And as requirements for availability and business continuity have increased over time, high value capabilities have been designed into storage systems. This has made systems more sophisticated (also more complicated) and more expensive. The capabilities are enumerated in product data sheets.
Private cloud deployments mimic the approaches used in public clouds, where storage is more of a commodity and functionality is added in software at the application level. This means resiliency and availability do not depend on the underlying storage system, but must be built into the application or the software layer where it executes.
This second segment creates an opportunity for people designing storage systems: to provide storage platforms for this market that are different from the more sophisticated systems required to meet traditional IT demands. These systems can be as simple as enclosures with devices and control hardware, plus functionality that routes access.
While private clouds do not eliminate the need for sophisticated storage systems, they represent a set of evolving requirements and product offerings. This is where the “land grab” is on for storage vendors to establish market share. The change presents an opportunity for new storage vendors that can meet the cost and margin demands to enter the market, and allows storage consumers to buy storage that better fits their changing needs.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Violin Memory initiated a turnaround plan under a new CEO a year ago and still has little to show for it.
It’s been a busy year for Violin under CEO Kevin DeNuccio, with several new product rollouts to help the all-flash vendor overhaul its portfolio and remain competitive in a hot area. Still, Violin’s revenue for the fourth quarter of 2014 shrank from the previous year, as did its full-year revenue. Losses continue, and guidance for this quarter came in light.
The best thing you can say about Violin’s earnings results from Thursday is that the vendor lost less money last quarter than it did in the same quarter a year ago. But it lost twice as much in the fourth quarter of 2014 as it did in the third.
Here are the numbers:
• Fourth quarter revenue of $20.5 million was down six percent from last year (excluding revenue from the discontinued PCIe flash business) and down 10 percent from the previous quarter. It also fell $3.5 million below analysts’ expectations.
• Fourth quarter loss was $46.8 million, compared with a $23.5 million loss in the third quarter and a $56.5 million loss in the fourth quarter of 2013.
• Full year revenue of $79 million was down 27 percent from the previous year, and the full year loss of $108.9 million was an improvement over 2013’s loss of $149.8 million.
• Guidance for this quarter was in the range of $21 million to $24 million, with the high end hitting analysts’ expectations.
DeNuccio said he remains optimistic that he can complete what he calls “one of the greatest technology transitions” he has ever seen. That optimism is based on two factors: Violin’s newly updated Flash Storage Platform (FSP) and the rising wave of flash storage.
“While the fourth quarter was technically challenging, it is important to recognize that we have built a strong foundation in our people, our technology and our financials,” DeNuccio said on Thursday’s call with analysts. “We are not just in a better, stronger place, but we are well positioned at the forefront of one of the greatest technology transitions I’ve witnessed in my career.”
DeNuccio blamed the disappointing fourth quarter on customers holding off on buying until they can test the new FSP systems. He said he expects to increase revenue 10 percent every quarter this year, but gave no timetable for turning a profit.
Violin upgraded its Concerto operating system a month ago, adding features such as block-level inline deduplication and compression, snapshots, clones, replication, CDP and better management features. The goal is to make Violin’s all-flash arrays suitable for primary storage rather than only high-performance use cases.
Violin executives say the average selling price for the new primary storage arrays will be up to four times that of Violin’s previous platform. CFO Cory Sindelar estimated the new average price will be in the $500,000 to $1 million range, up from $200,000 to $250,000 for its old systems.
But Violin faces tough competition for those dollars in the flash market, mainly from EMC, IBM and well-funded, privately held Pure Storage. None of those vendors is backing off its dedication to flash, nor are any of the other mainstream storage vendors or startups such as SolidFire. And new flash platforms are still coming into the market, such as SanDisk’s InfiniFlash launched this week.
So that great tech transition is hardly a sure bet.
Storage revenue rebounded in the fourth quarter of 2014, no thanks to the large vendors.
Total storage revenue grew 7.2 percent to nearly $10.6 billion and external (networked) storage increased 3.4 percent to $7.15 billion, according to IDC’s worldwide quarterly disk storage tracker.
For the year, storage spending rose 3.6 percent to $36.2 billion and total capacity increased 43 percent to 99.2 exabytes.
That comes despite a year-over-year decline of nearly seven percent in overall storage spending in the first quarter of 2014, and two quarters of external storage declines during the year.
No. 1 EMC grew revenue 3.3 percent to $2.35 billion, but its overall storage market share still slipped from 23.1 percent to 22.2 percent year over year. No. 2 Hewlett-Packard increased revenue 4.8 percent to $1.46 billion, but its share dropped from 14.1 percent to 13.8 percent. No. 3 Dell increased revenue 5.2 percent to $952 million, putting it barely above IBM, whose revenue dropped 23.8 percent to $951 million; Dell and IBM each had nine percent market share. No. 5 NetApp’s revenue declined 3.5 percent to $764 million, and its market share dropped from eight percent to 7.2 percent.
So how did the market grow if the combined revenue of the top five vendors declined? Smaller vendors (IDC’s “Others” category) and original design manufacturers (ODMs) that sell directly to hyper-scale data centers led the rally.
Other vendors increased revenue 20.2 percent to $2.74 billion and grew share from 23.1 percent to 25.9 percent. ODMs increased revenue 39.4 percent to $1.36 billion and grew market share from 9.9 percent to 12.8 percent.
In external storage, EMC grew 3.3 percent to $2.35 billion to remain at 32.9 percent market share. No. 2 IBM dropped 7.2 percent year-over-year to $837 million to fall from 13 percent to 11.7 percent share and No. 3 NetApp decreased 3.5 percent to $764 million. No. 4 HP grew 3.3 percent to $689 million and 9.6 percent share. HDS’ 3.5 percent increase to $577 million left it at 8.1 percent share. The revenue from other storage vendors increased 12.3 percent to $1.93 billion and jumped from 24.9 percent to 27 percent market share.
IDC research director Eric Sheppard attributed the fourth-quarter growth to year-end seasonality, demand for mid-range systems with flash capacity and hyper-scale data center storage.
Dell led in total storage capacity shipped during the year at 10.85 exabytes, followed by HP with 9.98 exabytes and EMC with 9.96 exabytes.
Western Digital, which has been collecting flash-related startups over the past few years, today beefed up its archiving portfolio by acquiring object storage vendor Amplidata.
Amplidata will become part of Western Digital’s HGST subsidiary. The deal is expected to close this month. Western Digital did not disclose the acquisition price.
Amplidata claims its Himalaya object storage software can handle zettabytes of data and trillions of stored objects under a single global namespace. The software was called AmpliStore until Amplidata re-branded it as Himalaya in June 2014.
Western Digital invested $10 million in Amplidata last year, and Amplidata was a joint development partner in HGST’s Active Archive platform. HGST sees Active Archives as large repositories of recently created data that needs to be accessed at least occasionally, unlike cold archived data that is rarely accessed.
Amplidata also has an OEM deal with storage systems vendor Quantum, which uses Amplidata in its Lattus and StorNext archiving products. It appears that relationship will continue, as Western Digital’s press release today quoted Quantum CEO Jon Gacek saying he was “excited about the acquisition” and is looking forward to new partnership opportunities.
Until today, HGST has concentrated more on storage performance products than archiving with its acquisitions. It bought all-solid state drive array vendor Skyera in December 2014, after picking up NAND controller vendor sTec, PCIe-based flash card vendor Virident Systems and application acceleration vendor VeloBit.
Who’s next for Western Digital/HGST? Keep an eye on Avere Systems. Western Digital led a $20 million investment round in Avere last July; Avere is also an Active Archive partner, and its NAS acceleration appliances incorporate flash.
Nimble Storage exceeded expectations with $68.3 million in revenue in its fiscal fourth quarter, helped by a new wave of Fibre Channel-based arrays that factored into about 10% of the company’s bookings.
But, the San Jose, California-based storage startup’s stock fell after Thursday’s fiscal 2015-ending earnings call, as financial analysts ratcheted down estimates in response to Nimble’s fiscal 2016 first-quarter guidance. Nimble predicted that revenue will be roughly flat, at $68 million to $70 million, and operating losses will range from $9 million to $10 million.
“Our guidance accounts for the seasonality effects of Q1,” said Nimble CFO Anup Singh. He noted the first quarter is typically the industry’s slowest.
Singh insisted that Nimble remains on target to break even by the end of the fiscal year, on Jan. 31, 2016. He said the company expects similar operating losses in the first and second quarters before improvement in Q3 and breakeven in Q4. He confirmed the nonlinear progression was purely a function of operating expenses – not slowing revenue or gross margin – in response to a question from a Wells Fargo analyst.
Nimble specializes in hybrid arrays combining solid-state and hard-disk drives. During the past year, the company introduced new capabilities such as Adaptive Flash, InfoSight performance monitoring, scale-out clustering, Triple Parity RAID and Fibre Channel (FC) storage networking.
The $68.3 million in fourth-quarter revenue marked a 15.5% increase over the fiscal 2015 third quarter and 64% growth over the fiscal 2014 fourth quarter, when Nimble began operating as a public company. The $228 million in fiscal 2015 revenue was 81% higher than the prior fiscal year’s $126 million.
The average selling price per deal hit record levels in the fourth quarter, with a significantly higher share of bookings exceeding $100,000 and $250,000, according to Nimble CEO Suresh Vasudevan.
Vasudevan said Nimble added 650 new customers during the fourth quarter and had an installed base approaching 5,000 by the end of fiscal 2015. He claimed 83 Fibre Channel customers were on board by Jan. 31, after the company added support for the storage networking technology in November.
“As we had anticipated, Fibre Channel is helping to increase deal sizes and is helping drive large enterprise penetration,” said Vasudevan. He said the pace of FC adoption exceeded expectations, and more than 70% of the FC customers were net new for Nimble.
Vasudevan cited an example of an unnamed Fortune 100 customer that spends tens of millions of dollars on storage and had been using technology from an industry-leading legacy vendor. “Without Fibre Channel, we would not even have made the consideration list,” he said.
Another area of customer growth for Nimble was SmartStack, pre-validated reference architectures for combining technology from Cisco and software vendors such as Microsoft, VMware, Citrix and Oracle into a converged infrastructure. Vasudevan said the SmartStack customer base grew threefold between fiscal 2014 and fiscal 2015.
“The frequency with which we are competing against hyper-converged vendors has increased . . . but it still represents a very small single-digit percentage of our total frequency,” said Vasudevan. “More often than that what we tend to see is competition against the likes of FlexPod or VCE.”
Vasudevan said Nimble’s Adaptive Flash, which can “dial the ratio of flash from very low to very high levels,” allowed the company to compete in twice as many all-flash array environments in the fourth quarter as in the prior quarter.
“Our win rates against those were higher by a decent margin than what we had seen ever before,” he said.
Vasudevan cited the company’s InfoSight-led support as a driver of repeat deployments. He said four major global companies addressed Nimble’s sales team at a kickoff event this month and told them InfoSight was “game-changing” in their day-to-day operational management of storage.
Several financial analysts asked about Nimble’s product roadmap during the earnings call. Vasudevan did not provide specifics other than to say the company planned to focus on differentiation through technologies such as its Adaptive Flash, file system and InfoSight and through integration efforts with alliance partners.
Joe Wittine, a senior equity research analyst at Longbow Research, said he heard deduplication is in the works and asked about the level of customer demand for the data-reduction technology. Vasudevan said dedupe can help in virtual desktop infrastructure (VDI) environments where there are hundreds of desktop images that look the same. He said one option to reduce space is dedupe and another is zero-copy cloning, which Nimble supports.
“In some of those modes, deduplication can help you optimize cost even more. I won’t comment specifically on timing,” said Vasudevan.
IBM has finally joined the bandwagon of solid-state array vendors that use multilevel cell (MLC) NAND technology and guarantee the read/write endurance of flash modules. Those changes came after lots of behind-the-scenes work.
Engineers from the company’s Texas Memory Systems acquisition and IBM researchers from Zurich and other locations combined to develop the new FlashCore technology at the heart of the FlashSystem V9000 and 900 arrays.
IBM is one of only a few vendors that buy NAND chips and make their own flash modules for all-flash arrays (AFAs). But last year IBM was the only AFA vendor to make flash drives using enterprise MLC flash (eMLC). In the summer, IBM Fellow and CTO Andrew Walls said eMLC was an important part of IBM’s strategy, bringing a 10x improvement in endurance over typical MLC-based solid-state drives (SSDs).
Last week, Walls said, “Our design goal with the FlashCore technology, with our advanced flash management, was to take endurance out of the equation. You simply run it and don’t worry about it.”
Flash can wear out over time due to the program/erase process for writing data to NAND chips. All the bits in a flash block must be erased before a write takes place. The program/erase process eventually breaks down the oxide layer that traps electrons at the floating gate transistors, leading to errors. The industry’s wear-out figures are about 30,000 program/erase cycles for eMLC flash and, for MLC, 10,000 or even as few as 3,000 cycles.
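The wear mechanism described above is why SSD firmware tracks erase counts per block. A minimal sketch (illustrative only, not any vendor’s firmware; the cycle limits are the rough industry figures cited above):

```python
# Illustrative sketch: tracking program/erase (P/E) cycles per flash block
# and reporting how much of the rated endurance has been consumed.
# Cycle limits are the approximate industry wear-out figures cited in the text.

RATED_CYCLES = {"eMLC": 30_000, "MLC": 10_000}

class FlashBlock:
    def __init__(self, nand_type="MLC"):
        self.nand_type = nand_type
        self.erase_count = 0

    def program_erase(self):
        # Writing to already-programmed cells requires an erase first,
        # and each erase stresses the oxide layer a little more.
        self.erase_count += 1

    @property
    def wear_pct(self):
        """Percentage of rated endurance consumed so far."""
        return 100.0 * self.erase_count / RATED_CYCLES[self.nand_type]

block = FlashBlock("MLC")
for _ in range(3_000):
    block.program_erase()
print(f"{block.wear_pct:.0f}% of rated endurance used")  # 30% of rated endurance used
```

An MLC block rated at 10,000 cycles that has absorbed 3,000 erases is already at 30% of its life, which is why techniques that steer writes away from weak blocks matter so much.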
But anecdotal evidence is mounting that flash is not wearing out as once feared.
“It’s not happening at all,” asserted Gartner Research VP Joe Unsworth, speaking at IBM’s FlashSystem launch event last week. “We see very few failures of drives period, and of course, let’s not forget SSDs fail predictably. So, you can see as this occurs. Right now, we’re seeing about every six months, 2% to 4% flash wear across the solid-state array. That’s not much at all.”
Plenty of vendors have worked hard to improve the endurance of flash. Here’s a glimpse of what Walls said IBM did to improve the endurance of its MicroLatency flash modules without sacrificing performance or low latency.
—Collaborated with Micron, which provided the interface to the “inner workings of the flash,” enabling IBM to monitor and control the flash and change read thresholds.
—Set up a characterization lab in Poughkeepsie, New York, to test flash devices and observe how flash blocks behave as engineers tried different error correcting code (ECC) and garbage collection algorithms and other techniques.
—Developed an ECC algorithm that Walls said allows IBM to correct a high bit error rate and read data only once. “That is a significant step forward. It also allows us to stay in FPGA technology, and it is an algorithm that allows us to get extremely good endurance,” he said.
—Developed health binning and heat segregation technology instead of using the symmetric wear-leveling algorithms that Walls said ensure all cells handle about the same amount of writes in typical SSDs.
“When you do that, unfortunately the endurance of your flash is now going to be determined by your weakest cells, because they’re going to get punished the same as all the rest, and you will wear out depending on that,” he said.
Walls compared IBM’s approach to pack mules in the Grand Canyon carrying loads of 50 pounds, 100 pounds or 200 pounds to enable them to do the job with half the number of animals.
“We monitor the health and assess the health of each flash block as they age, and we determine and grade each of the flash blocks. The flash blocks that are the healthiest [are] going to get the hottest data,” Walls said. He said, as flash blocks age and get weaker, they handle colder data.
“That technique alone has given us a 57% improvement in endurance in most typical workloads,” he said.
Walls claimed that IBM reduced write amplification by up to 45% by grouping like heat levels.
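The health-binning and heat-segregation idea Walls describes can be sketched in a few lines: grade each block by measured health, grade each data segment by write heat, and place the hottest data on the healthiest blocks so the weakest blocks absorb the fewest program/erase cycles. This is an illustrative sketch under those assumptions, not IBM’s FlashCore implementation; the scores and names are invented:

```python
# Illustrative sketch of health binning + heat segregation (not IBM's actual
# FlashCore code). Blocks are graded by health, data segments by write "heat";
# the most frequently rewritten data lands on the healthiest blocks, so weak
# blocks see fewer program/erase cycles than symmetric wear leveling would give.

def place_data(blocks, segments):
    """blocks: list of (block_id, health_score in 0..1, higher = healthier).
    segments: list of (segment_id, heat_score in 0..1, higher = more rewrites).
    Returns {segment_id: block_id}."""
    # Healthiest blocks first, hottest data first: pair them up in order.
    by_health = sorted(blocks, key=lambda b: b[1], reverse=True)
    by_heat = sorted(segments, key=lambda s: s[1], reverse=True)
    return {seg_id: blk_id for (blk_id, _), (seg_id, _) in zip(by_health, by_heat)}

blocks = [("b0", 0.95), ("b1", 0.40), ("b2", 0.70)]
segments = [("logs", 0.9), ("archive", 0.1), ("db", 0.6)]
placement = place_data(blocks, segments)
# Hot "logs" go to the healthiest block b0; cold "archive" to the weakest, b1.
print(placement)  # {'logs': 'b0', 'db': 'b2', 'archive': 'b1'}
```

This is the pack-mule analogy in code: instead of every block carrying the same write load, the strongest carry the most, so endurance is no longer bounded by the weakest cells.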
One result of IBM’s efforts is a new FlashSystem Tier 1 Guarantee, which includes “MicroLatency” performance and read/write endurance for up to seven years, as long as the system is under warranty or a maintenance contract.
That brings IBM up to par with other all-flash array vendors. In June 2014, when SearchStorage.com published a guide to 15 all-flash arrays, IBM was the only vendor that would not replace flash modules if they wore out before the warranty expired. Dell’s Compellent all-flash model noted a caveat that the SSDs had to be within the “rated life” period. None of the other vendors mentioned any restrictions, although the length of their guarantees varied.
Dell Ventures is the newest investor in object-based storage startup Exablox after leading a $16 million funding round this week.
Dell joined previous Exablox funders DCM Ventures, Norwest Venture Partners and US Venture Partners in the round, bringing the startup’s total funding to $38.5 million.
Exablox OneBlox appliances consist of object storage integrated with a distributed file system, and serve as primary storage or backup.
Exablox CEO Doug Brockett said the funding round does not include any product or marketing partnerships between his company and Dell. But it does show that Dell sees value in Exablox technology.
“Dell certainly understands this market that we sell into extremely well, and Dell understands storage extremely well,” he said. “There is no agreement in place to do anything [with Exablox technology] yet. Over time we can see what happens. We both see the same market opportunities out there.”
Brockett said OneBlox has sold in the United States and Canada since launching in 2013. Exablox will use the funding to expand its international sales, including channel recruitment. Brockett said the company has a “rich roadmap” for product upgrades in 2015 and will add engineering staff with the funding, but that expansion “will pale in comparison to what we’re doing on the sales and marketing side.”
Exablox’s upgrade strategy has been to make changes in software that customers install on the same hardware that they bought initially.
“We’re software-defined storage. We don’t ask people to adopt new hardware,” Brockett said.
Dell plans to ship the second version of its XC Series of hyper-converged appliances next week, just four months after getting the first wave of products out the door in November.
The XC Series appliances bundle Dell’s hardware, Nutanix’s software, and VMware’s ESXi or Microsoft’s Hyper-V hypervisor technology. Version 2.0 uses the 4.1 release of the Nutanix software, which aggregates and manages the clustered server and direct-attached storage resources.
Version 2.0 of the XC Series appliances will be the first to run Dell’s 13th generation PowerEdge server technology with the latest Intel Xeon processor E5-2600 v3 product family. Other differences between the first and second editions include flexible options for the numbers and capacities of solid-state drives (SSDs) and hard-disk drives (HDDs), processor cores and speeds, and DIMM and memory configurations.
With the second wave of XC products, Dell also plans to introduce a new 1U model next week based on its PowerEdge R630 server technology. The more compact XC630 will support more virtual desktop users in half the rack space of Dell’s original XC720xd at a lower cost, according to Travis Vigil, executive director of product management for Dell Storage.
Dell will also release two new higher density 2U appliances based on its PowerEdge R730xd servers. The XC730xd-12 has a dozen 3.5-inch drive slots and options for two to four 200 GB, 400 GB or 800 GB SSDs and four to eight 4 TB HDDs. The XC730xd-24 has two dozen 2.5-inch drive slots and can handle two to four SSDs and a minimum of four and a maximum of 20 HDDs of 1 TB capacity.
“With the XC730xd offering 60% more storage than the predecessor version, we think we’ll probably see more interest in big data type workloads,” said Vigil.
Dell product literature claims the XC730xd-12 can run storage-heavy Microsoft Exchange and SharePoint, data warehouse and big data workloads, and the XC730xd-24 is suitable for performance-intensive Microsoft SQL Server and Oracle OLTP workloads.
The XC630’s spec sheet says the product targets compute- and performance-intensive virtual desktop infrastructure (VDI), test and development, private cloud and virtual server workloads. The 1U XC630-10 has 10 2.5-inch drive slots and can hold two to four SSDs at raw capacities of 200 GB, 400 GB or 800 GB, plus four to eight 1 TB HDDs.
Vigil said there is no limit on the number of systems that can be clustered. He said a typical configuration ranges from three to 10 units, but he noted that Nutanix customers have clustered to upwards of 100 units.
All XC Series appliances are currently available only as hybrid storage configurations, but Vigil said plans call for an all-flash storage option this year. Dell also plans to add support for the open source KVM hypervisor this year, according to Vigil.
List pricing for the 1U XC630 starts at about $32,000, including the appliance, the Nutanix software, two 200 GB SATA SSDs, four 1 TB HDDs and a one-year Dell ProSupport service contract. The starting list price for the 2U XC730xd is about $45,000 with two 200 GB SATA SSDs, four 4 TB HDDs and a one-year Dell support contract, according to a Dell spokesman.
“The official growth rate for these hyper-converged solutions, like we have with the XC Series, is an order of magnitude greater than what we’re seeing with traditional data center hardware spending,” said Vigil. “So, we’re very optimistic about the future here. We’re happy with the demand that we’ve seen so far.”
Dell also sells an appliance for VMware’s EVO:RAIL, which began shipping last fall. Dell’s software-defined storage offerings also include reference architectures for software from vendors such as Microsoft, Nexenta and Red Hat with its servers and storage.
Networking and wireless chip maker Avago Technologies is pushing deeper into enterprise storage with its planned acquisition of storage networking vendor Emulex.
Avago said Wednesday evening that it has agreed to acquire Emulex for $606 million – considerably less than Broadcom offered for Emulex in a hostile takeover attempt in 2009.
Like its main rival QLogic, Emulex sells Fibre Channel host bus adapters and Ethernet storage connectivity products through major storage vendors. The deal comes 14 months after Avago splashed $6.6 billion on storage component firm LSI. Avago then sold LSI’s solid-state drive controller and PCI Express (PCIe) flash card business to Seagate for $540 million in May 2014, but retained its SAS controller and PCIe products.
“Emulex is complementary to Avago’s enterprise storage businesses, and aligns very well with the Avago business model,” Avago CEO Hock Tan said of the deal.
Tan said Emulex’s storage OEM partners are among the same vendors that sell Avago’s SAS, RAID and PCI Express (PCIe) switching products. EMC and Hewlett-Packard are Emulex’s largest storage partners.
“We expect this transaction to allow us to offer one of the broadest suites of silicon and software storage solutions to the enterprise and data center markets,” Tan added.
Tan said he expects the deal to close by the end of June. Emulex will operate as a business unit inside Avago’s enterprise storage segment.
Tan projected that Emulex would add about $250 million to $300 million in annual revenue over the first year, which is below previous expectations. Emulex reported $111 million in revenue last quarter, and analysts expected it to generate about $400 million in 2015 as a standalone company.
Still, Tan said he sees strong interest in Fibre Channel and Fibre Channel over Ethernet revenue for storage. “We see that this Fibre Channel business is really a very sustainable, stable business,” he said. “It’s a kind of business where we see a lot of barriers to entry, obviously. And we see a very unique technology, which is very hard to replicate because of all the criteria that fits our business model. So it’s a logical and strategic next step for us to add Fibre Channel and Fibre Channel over Ethernet into our suite of component solutions and software.”
Emulex fought off a takeover attempt by Broadcom in 2009, enacting a poison pill to keep the networking company from buying a controlling interest. Emulex said Broadcom’s offers undervalued its shares, but those offers look good now: Broadcom’s opening bid was worth $764 million, and it raised that to $925 million before walking away.
Emulex CEO Jeff Benck, who was not with Emulex when Broadcom made its move, called the Avago deal “a great opportunity for Emulex” in a press release.
Avago will pay $8 per share for Emulex. The HBA vendor’s stock rose from $6.36 at the close of Wednesday to $8.03 at today’s opening. Still, at least 10 law firms have already said they will investigate whether the Emulex board got the best price for the company.
Pivot3, the hyper-convergence vendor that concentrates on the surveillance and virtual desktop infrastructure (VDI) markets, picked up $45 million in funding this week to expand sales and marketing of its vStac appliances.
Pivot3 CEO Ron Nash said the company will rapidly expand its workforce, which stands at 92 today. He said the goal is to hire 17 people in each of the next two quarters, mostly in sales and marketing with a few developers. He said the company is looking to double its growth rate in the security market this year.
Nash said Pivot3 has more than 1,600 customer systems installed. “The first product most people buy from us is surveillance,” he said. “We started going into broader applications because after we installed the first system for video surveillance, they ask us, ‘Can we run Microsoft Exchange or something else on it?’ We say, ‘Yeah, sure.’”
Pivot3 began selling what it called “serverless computing” appliances in 2008, moving the application server into the storage node along with Xen hypervisors. That was hyper-convergence, although no one used that term at the time. The original use case was storage for surveillance video, and then the vendor launched VDI appliances in 2011.
Pivot3 originally sold software only on appliances, but recently began selling its software separately for customers who want to install it on blades for VDI.
The hyper-converged market is taking off, with VMware making a splash with its Virtual SAN (VSAN) software and large partners such as Dell, Hewlett-Packard, EMC and NetApp, which sell VSAN through VMware’s EVO:RAIL program. Dedicated hyper-convergence startups Nutanix, SimpliVity, Scale Computing, Maxta Software and Nimboxx make it a crowded market.
“Hyper-converged infrastructure is a nice topic of conversation among people, there’s a lot of activity,” Nash said. “There’s a huge wave.”
Nash said the entrance of VMware and the other large vendors could help Pivot3 by bringing attention to the market. Now it’s up to Pivot3’s expanded sales team to convince people that Pivot3 does hyper-convergence better than the others.
“The whole purpose of hyper-convergence is to give you better capabilities and a better price,” he said.
New investor Argonaut Private Equity led the round, which included S3 Ventures, InterWest Partners, Mesirow Financial Private Equity and Wilson Sonsini Goodrich & Rosati. Pivot3 received $12 million in funding last August, and has around $145 million in total funding.