Cloud backup vendor Infrascale today closed a $16.3 million Series B funding round and immediately made an investment by acquiring backup deduplication appliance vendor Eversync Solutions.
Infrascale did not disclose the amount of the transaction, but its CEO Ken Shaw said Eversync has millions of dollars in annual revenue and will expand Infrascale’s technology.
Infrascale’s products include Infrascale Backup for physical and virtual servers, Infrascale EndGuard for endpoint devices, Infrascale FileLocker for file sharing, and SOS Online Backup for consumers and small businesses. The company was known as SOS Online Backup until 2012 but changed its name because the SOS brand was associated with consumer backup, Shaw said. Eversync also goes by a different name than it used originally; it started out as Revinetix.
The Eversync appliances hold from 2TB to 176TB of raw data, and can handle more than 1 PB of usable capacity with dedupe. Shaw said Infrascale will continue to sell Eversync’s current products but will add its own software to create disk-to-disk-to-cloud backup products later this year.
“We [Infrascale] do source-side and target-side dedupe but we haven’t gotten into the appliance space yet,” Shaw said. “This allows us to put a lot of our technology onto their appliances. We’ll do that for physical and virtual appliances.”
All Infrascale applications back up to the cloud, as do the Eversync appliances.
“Cloud backup and integrated appliances are two parts of the backup market that are growing,” Shaw said.
Shaw said Eversync’s 18 employees will join Infrascale. The Eversync team will remain in Salt Lake City; Infrascale is based in Los Angeles.
Carrick Capital Partners led the funding round with DH Capital participating.
Shaw said Infrascale is profitable and the funding round “is not about sustaining runway. It’s about making long-term strategic additions to our technology.”
EMC CEO and chairman Joe Tucci this week left open the door to remaining as CEO past next February, and slammed it shut on the possibility of selling off VMware.
Tucci spoke Thursday at the Sanford C. Bernstein Strategic Decisions 2014 investor conference. Bernstein analyst Toni Sacconaghi introduced Tucci by saying, “This is purportedly Joe’s last year as CEO.”
Sacconaghi then asked Tucci about his plans as EMC Federation’s CEO. Tucci has already postponed his retirement twice, and his contract expires in February 2015. After that, plans call for him to remain as chairman and relinquish the CEO job to a person within EMC – likely either EMC storage CEO David Goulden, VMware CEO Pat Gelsinger or Pivotal CEO Paul Maritz.
But Tucci said that plan is far from cast in stone. He said he would stay on if the board – which he chairs – wants him to.
“What I’ve said to our board is, that’s like a target date,” Tucci said. “First of all, we’re blessed with some great CEOs in the federation today. And if they think the timing is right and they’d like to do it previous to February I’m fine with that. If they think they’d like a little more time, I’m also fine with that. And I’m not talking about years, I’m talking in terms of months. So there is no bright line drawn in the sand that February 6 at two o’clock in the morning … There is a lot of flexibility and if asked, and that’s the way the board is indicating they’d like me to, I would gladly stay on.”
Sacconaghi asked Tucci how he would react to shareholder calls for EMC to sell off all or some of the 80 percent of VMware that it owns. Tucci said EMC Federation, which includes VMware, Pivotal and RSA Security, provides all the pieces for a software-defined data center and is better as a large company.
“It’s better together,” he said. “Collectively, it’s just a lot stronger story. If you look at who is going at this market, these are big companies. These are companies called IBM, these are companies called Cisco. I think if you break it up, you just weaken every part.”
He added, “I don’t have any plans to buy back the 20 percent (of VMware) we don’t own. I don’t have any plans to sell any of the 80 percent we do own.”
Violin Memory sold off its fledgling PCIe flash business to SK hynix for $23 million this week as it tries to dig out of the financial hole it fell into late last year.
The sale gives Violin much-needed cash while allowing it to focus on its all-flash array platform. That was a priority for new CEO Kevin DeNuccio when he took over the company last February, two months after the board fired his predecessor Don Basile following Violin’s rocky start as a public company.
The PCIe sale accompanied layoffs as Violin moved to reduce expenses last quarter. The expense reduction worked, although the reduction and reorganization of Violin’s sales team played a role in a huge revenue drop.
Violin Thursday reported $18.1 million in revenue last quarter, down 35 percent from the previous quarter and 27 percent from last year. Violin also lost $30.1 million in the quarter.
The loss was actually an improvement over the previous quarter when Violin lost $56.5 million. DeNuccio said Violin cut expenses by about $8 million last quarter.
“This will be a transitional year for Violin,” he said on a conference call. “There will be a lot of moving pieces. We made dramatic financial improvements during the quarter.”
Of the $10 million sequential drop in revenue, $4 million came from PCIe revenue falling from $5 million to $1 million.
DeNuccio did not give revenue guidance but said he expects revenue to begin growing in the second half of the year. He blamed last quarter’s drop on the sales reduction, saying it was “not a reflection of demand for our products or the flash market in general. It’s the results of changes we made to position the company for long-term success.”
He also said Violin will have a significant new product launch over the next few months.
“It’s been our cash burn rate, not our technology that has caused concern,” he said.
Burn rate remains a concern. Pointing to Violin’s $87 million in cash and recent losses, Sterne Agee financial analyst Alex Kurtz wrote in a note to clients today: “Without a bounce back in [second half] growth as management has outlined, liquidity concerns would become a significant issue, especially as new competitors enter the market and pricing pressure becomes more acute.”
Surrounded by large and small storage competitors diving into the all-flash array market, Nimble Storage CEO Suresh Vasudevan said his company is at least three years away from taking such a step.
Vasudevan said Nimble’s CASL file system will support an all-flash array, but market demand is not yet there because of pricing dynamics.
He said Nimble meets the requirements for all flash, with features such as scale-out, data reduction and file locking. But he said Nimble’s hybrid systems have enough performance to compete with all-flash arrays without the cost.
“I will say the architecture itself is broad enough to enable us to go towards the flash-only array,” Vasudevan said Thursday on Nimble’s earnings call.
“The more difficult question to answer is when or whether we think it will become necessary. That question entirely revolves around the endurance of flash coupled with the price of flash. Will the price of flash go down without compromising endurance to a point where the economics start to favor an all flash array? At this junction, not even the semiconductor industry will give you a clear answer that says endurance will stay roughly where we need it to be for enterprise flash arrays and price will go down. So that’s the big unknown. What I am sure of is it is not happening in the next three to five years.”
Nimble’s CS200 and CS400 iSCSI arrays all combine flash and spinning disk. In June, the vendor will also launch a higher-end system that includes what Nimble calls Adaptive Flash.
Nimble is competing well as is, with revenue of $46.5 million last quarter more than doubling over the previous year as its larger competitors saw their revenue shrink. The vendor exceeded its revenue goal and added 450 new customers in the quarter. The forecast for this quarter is for $49 million to $51 million in revenue.
Nimble also continues to lose money while in growth mode (76 employees added last quarter to bring the total to 668). It lost $10 million last quarter and expects to lose between $11 million and $12 million this quarter. Vasudevan said he doesn’t expect to break even for nearly two years.
Despite the losses, Vasudevan said Nimble is moving into larger companies. He said the vendor had 400 deals of more than $100,000 over the last year, twice as many as the previous year. The addition of Fibre Channel support planned for later this year should also help Nimble move into the enterprise.
Established storage companies are apparently taking Nimble seriously now. Vasudevan said price competition is “more intense” as large vendors fight harder for deals. “I would say that’s the one change versus the large incumbents,” he said when asked if large vendors are getting more aggressive on pricing.
Unitrends today added cloud backup technology to its data protection product line, which already includes physical backup (Unitrends integrated appliances), virtual backup (PHD Virtual) and disaster recovery failover, failback and testing (ReliableDR).
The cloud backup comes from Yuruware, an early-stage Australian startup that Unitrends acquired today.
Yuruware technology is months away from shipping, but will be built into the ReliableDR software that PHD Virtual acquired from VirtualSharp last year before Unitrends acquired PHD Virtual. ReliableDR allows customers to replicate between sites or into a Unitrends cloud. ReliableDR technology has been integrated into PHD Virtual and Unitrends backup products, and Yuruware technology will extend its capabilities to public clouds such as Amazon Web Services.
“With our backup products today, we have the ability to replicate between two end points,” said Mark Campbell, Unitrends chief strategy and technology officer. “We start running into problems with public clouds when we have to transform a VMware machine into an Amazon machine. We also have problems around the network because VMware has a virtual network and Amazon has its own network [programming] language known as CloudFormation. Yuruware can spin up machines and networks from a local infrastructure into the cloud.”
Campbell said Yuruware has technology similar to Amazon’s “pilot light” disaster recovery pattern, replicating data that resides outside a critical database but must be failed over and back to recover from a disaster.
“Yuruware has technology that does that but doesn’t use compute resources [such as Amazon EC2],” he said. “It only uses compute cycles when you want to assemble those objects into an Amazon Machine Instance.”
Unitrends acquired Yuruware out of the Australian incubator NICTA. The startup consists of seven developers and patented intellectual property, which Unitrends plans to expand into full-blown public cloud backup. Campbell said Unitrends will keep the Yuruware team in Australia and add developers.
The roadmap includes expanding Yuruware’s current support for AWS and OpenStack to other public clouds, including Microsoft Azure.
Campbell said Yuruware IP is expected to show up in Unitrends products by the end of this year, with more to come in early 2015.
Cloud service provider dinCloud is taking on Amazon, hoping to lure customers away with dinStorage S3, its full-service cloud storage that is based on the open Amazon S3 APIs and can be used by S3-compatible applications.
The company recently launched its dinStorage S3 service that charges no transfer fees, which is one of the main “gotchas” of a cloud service. dinCloud also is offering 10 Gigabit Ethernet (GigE) to 40 GigE speeds, dedicated networking with AES256 encryption and multiprotocol label switching (MPLS) connectivity with 15 carriers that include AT&T and Sprint.
dinCloud offers unlimited storage capacity and charges on a per gigabyte, per month basis.
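The S3 compatibility claim is worth unpacking: an application written against Amazon’s S3 API can, in principle, be pointed at a compatible service simply by changing the service endpoint, with bucket and key addressing left untouched. A minimal sketch of that idea follows; the dinCloud hostname is hypothetical and used only for illustration (with a real SDK such as boto3, the same value would be passed as `endpoint_url`):

```python
# Sketch of what "S3-compatible" means in practice: requests keep the
# same shape and only the service endpoint changes. The dinCloud
# hostname below is hypothetical, for illustration only.

def s3_object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style S3 object URL for a given service endpoint."""
    return f"https://{endpoint}/{bucket}/{key}"

# The same bucket/key addressing works against either endpoint.
aws_url = s3_object_url("s3.amazonaws.com", "backups", "db/2014-05-30.tgz")
din_url = s3_object_url("s3.dincloud.example", "backups", "db/2014-05-30.tgz")

print(aws_url)
print(din_url)
```

Only the hostname differs between the two URLs, which is why S3-compatible tools can generally switch providers through configuration rather than code changes.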
“These guys are going for high-end cloud use cases and going after Amazon’s biggest customers,” said James Bagley, senior analyst at Storage Strategies Now. “It’s a cost play. They are providing lower cost for volume users. These are companies that are well experienced in cloud computing. What struck me is they are really going for it. They have the network and infrastructure wherewithal to pull it off.”
Los Angeles-based dinCloud began offering dinBackup, a data protection cloud service, in November 2012. The service ran on NetApp storage and targeted NetApp customers looking for backup and disaster recovery. dinCloud has since discontinued it, although it still supports some NetApp replication features for customers that want them. dinCloud has switched to white box storage for its primary storage.
“We had provided cloud services to existing NetApp customers (but) they are no longer our primary storage provider so we discontinued the service,” said Mike Chase, dinCloud’s chief executive officer. “In large deals, we will consider it. It’s too costly and it lacks a lot of the cloud firmware that we needed. It’s also hard to encrypt. Encryption is possible on NetApp but it’s very tricky. You have to buy an encryption unit.”
Aside from charging no transfer fees, dinCloud offers 40 GigE compared to Amazon’s 10 GigE maximum. It charges $1,200 a month for a dedicated host compared to Amazon’s $1,440 a month. dinCloud encrypts all data at rest, while Amazon offers no encryption on Elastic Block Storage (EBS) volumes, an omission dinCloud calls “just plain stupid” in its marketing presentation.
dinCloud also offers an IP Reputation (IPR) service that maintains a database tracking every IP address on the Internet and its reputation based on past activity. Any IP known to be a source of criminal activity is blocked. dinCloud states Amazon is “morally deficient” for not offering such a service.
“Cloud providers are in a better position to offer it than most customers due to better pricing, engineering talent and global reach,” Chase said.
Bagley said dinCloud’s partnership with colocation provider Equinix makes its service a strong alternative to Amazon. Equinix has 100 data centers worldwide and “they tend to place them where you have high intersections of telecommunications,” he said.
“These guys are claiming to be one third the cost of Amazon S3,” Bagley said. “They can have a client have its own equipment, cage and firmware and still have all the advantages of bandwidth and throughput and automatic resiliency in the network. That is a powerful argument to say we can shut off AT&T and you won’t even know it. They can switch around without breaking stride.”
Brian Owen today moved into a well-worn – and well-warmed – seat as CEO of X-IO Technologies. Owen becomes the vendor’s ninth CEO in less than 12 years. His job will be to help X-IO find a secure place in the storage world with its ISE storage bricks.
Owen has been CEO of MapInfo and decalog NV, and before that held executive roles at Computer Associates, Oracle and DEC.
He joined X-IO in January as vice chairman. Now he’s trading places with John Beletic, who goes from X-IO CEO to chairman. Beletic held the CEO post since 2011.
Owen said he has been meeting with customers since January, and is satisfied the vendor’s technology is sound. He said X-IO needs to focus more on potential customers who are heavily virtualized, including on the desktop.
“The product is real and appropriate, but for specific parts of the market,” he said. “We’re not storage for all. We’re not general purpose storage. We’re high performance storage for a certain class of applications. The key things for me are to make sure we refine our focus and go into markets where we’re relevant and highly competitive.
“As you move towards highly virtualized environments, we become far more relevant. If you think about a SAN world, there’s controller and all kinds of features built into a controller. In a post-SAN highly virtualized world, you build features into the hypervisor that used to be in the SAN. We are not heavy in features. We count on hypervisor to deliver those features, and we just deliver pure raw high performance and reliable storage.”
Owen said for VDI, “we’re just a screamer. We have a highly capable product there. Our sweet spot is high performance, reliable performance at a tremendous price point. We can deliver all-flash class performance with hybrid.”
One of Owen’s previous companies, MapInfo, went public, and decalog was acquired by SunGard. Owen would not discuss X-IO finances but said, “We’re in growth mode. We had good growth in the first quarter. Our second quarter last year was a phenomenal quarter so we won’t have as much growth in this quarter but we’re still in growth mode.”
He added that X-IO needs fine-tuning more than a massive overhaul, calling ISE and the hybrid flash ISE “incredibly modern technologies” while describing the vendor as “introverted and a little techy. We need to change that and be more solutions-oriented in the way we go to market.”
Owen has already promoted David Gustavsson to COO from VP of engineering and Gavin McLaughlin to VP of worldwide marketing from solutions development director. Gustavsson will continue to head engineering, and take over technical marketing, manufacturing and support.
NetApp is having the same problems as the other large storage vendors these days – more data going into the cloud, elongated sales cycles, declines in federal spending and new innovative vendors taking customers from the big guys.
NetApp’s earnings and guidance released Wednesday reflect these struggles. But NetApp also has one problem that its large competitors don’t: IBM. NetApp’s biggest problem these days is that its OEM revenue is in free fall. IBM is its biggest OEM partner, selling NetApp’s E-Series business acquired from LSI as well as NetApp’s FAS storage under the IBM N-Series brand. IBM has been pushing its own storage over its partners’, and hasn’t had much success selling either.
IBM’s strategy and struggles are taking their toll on NetApp. NetApp’s OEM business last quarter fell 34 percent from the previous year, and is forecast to drop 40 percent this quarter. That caused NetApp’s overall revenue of $1.65 billion to fall four percent from last year. NetApp’s guidance for this quarter of between $1.42 billion and $1.52 billion fell far below what financial analysts expected.
“IBM reports its storage business down significantly in the last few quarters, over 20 percent [last quarter],” said NetApp CEO Tom Georgens. “IBM also has a portfolio of products they can sell that are alternatives to NetApp. We have both those dynamics at play — their ability to sell through has been challenged and our positioning within their portfolio has been challenged.”
Analysts on the call wondered why NetApp continues to sell through OEMs, which make up seven percent of its overall revenue. Georgens said the vendor is investing less in its OEM relationships while looking for ways to sell the E-Series more through its own channels. He pointed to the EF all-flash array as a successful E-Series product sold under the NetApp brand.
Some analysts say other vendors are hurting NetApp, particularly smaller competitors considered more innovative. In a note to clients today, Wunderlich Securities analyst Kaushik Roy pointed to flash startup Pure Storage, hybrid storage vendor Nimble Storage, hyperconverged vendor Nutanix, and software startup Actifio among those with disruptive technologies hurting larger vendors.
“While Pure Storage and EMC’s XtremIO all-flash products are gaining traction, NetApp still does not have an all-flash array that has been designed from the ground up,” Roy wrote. “It is well known that all-flash arrays are elongating sales cycles and customers are delaying their purchases of traditional hybrid storage systems. But what may not be well known is that new data structures, new analytics engines, and new compute engines are also stealing market share from traditional storage systems vendors. …
“In our opinion, NetApp needs to acquire technologies from outside to evolve quickly and remain one of the leading technology companies providing IT infrastructure.”
On the earnings call Wednesday, Georgens defended NetApp’s flash portfolio even if its FlashRay home-built all-flash array will not ship until the second half of the year. He said NetApp shipped 18 PB of flash storage last quarter, including EF systems for database acceleration, all-flash FAS arrays and flash caching products.
“I’ll state it flat out. I would not trade the flash portfolio of NetApp with the flash portfolio of any other company,” Georgens said.
However, he did not rule out acquisitions.
“We are open to opportunities that are going to drive the growth of the company,” Georgens said. “In a transitioning market where there are a lot of new technologies and a lot of new alternatives for customers, there are a lot of properties out there to look at. For the right transactions, we’d be very much inclined to [buy a company].”
When I spoke with Veeam Software CEO Ratmir Timashev a few weeks ago, he said the virtual machine data protection specialist is working on beefing up its data availability capabilities. Today, Veeam revealed more details about its next version of Backup & Replication software that is due in the third quarter of this year.
First, Backup & Replication 8 will be part of a new package called Veeam Availability Suite. The Availability Suite combines Backup & Replication with the Veeam One reporting, monitoring and capacity planning application. Veeam will still sell Backup & Replication as a standalone app, but Timashev said the vendor will focus on the suite to stress Veeam’s availability features.
“Instead of talking about backup and recovery, now we’re talking availability,” he said.
Veeam already disclosed one key feature of Backup & Replication 8 – the ability to back up from storage snapshots on NetApp arrays. Veeam Explorer for Storage Snapshots will also allow recovery of virtual machines, guest files and applications from NetApp SnapMirror and SnapVault. Explorer for Storage Snapshots already supports Hewlett-Packard StoreServ and StoreVirtual arrays.
NetApp is one of two major storage vendors Veeam is adding support for in version 8; the other is EMC. Version 8 includes EMC Data Domain Boost (DD Boost) integration, which allows Veeam customers using Data Domain backup targets to take advantage of EMC’s dedupe acceleration plug-in.
DD Boost is the first dedupe target acceleration software that Veeam supports, but Timashev said the vendor is working with Hewlett-Packard to support its Catalyst client.
Storage vendor support is part of Veeam’s strategy to move beyond its original SMB customer base.
“Most of our customers in the midmarket use Data Domain as a disk target,” Timashev said. “Working with NetApp and EMC positions us stronger in the midmarket and enterprise.”
Other new features in Backup & Replication 8 include built-in WAN acceleration with replication to go with its WAN acceleration for backups, Veeam Explorer for Microsoft SQL and Active Directory, 256-bit AES encryption for tape and over the WAN, and enhanced monitoring and reporting for cloud service providers.
Customers will be able to use the WAN acceleration for replication of backup jobs. Explorer for SQL is similar to Veeam’s current Explorer for Exchange application. Customers will be able to restore individual databases from backups or primary storage.
Timashev said snapshot support and Veeam Explorer allow the vendor to meet its goals of providing 15-minute recovery point objectives (RPOs) and recovery time objectives (RTOs).
“You can take NetApp snapshots every five, 10 or 15 minutes without affecting your production environment,” Timashev said. “We back up these snapshots, and that contributes to our mission of RPO in less than 15 minutes. Veeam Explorer is about fast recovery.”
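The sub-15-minute RPO claim is simple arithmetic: if snapshots run every N minutes, a failure can lose at most the last N minutes of data. A minimal sketch of that bound (illustrative only, not Veeam’s implementation):

```python
# Illustrative RPO arithmetic: with periodic snapshots, the worst-case
# data-loss window is the time elapsed since the most recent snapshot,
# which is bounded above by the snapshot interval.

def data_loss_minutes(snapshot_times, failure_time):
    """Minutes of data lost: time elapsed since the last snapshot
    taken at or before the failure."""
    last = max(t for t in snapshot_times if t <= failure_time)
    return failure_time - last

# Snapshots every 15 minutes; a failure just before the next snapshot
# loses just under 15 minutes of data, so the 15-minute RPO holds.
snapshots = [0, 15, 30]
print(data_loss_minutes(snapshots, 44.9))
```

Tightening the snapshot interval to five minutes shrinks the same worst-case window to five minutes, which is the trade-off behind the “every five, 10 or 15 minutes” options Timashev describes.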
Availability Suite will be available in the third quarter. Pricing has not been set yet, but it is expected to be slightly higher than Backup & Replication pricing. Backup & Replication’s current per CPU socket prices range from $410 for the Standard Edition to $1,150 for the Enterprise Plus edition.
Gartner storage research director Pushan Rinnen said she agrees with Veeam that greater storage vendor support will help it move into the enterprise. She said Data Domain integration will also strengthen Veeam’s dedupe performance.
“A lot of enterprises have adopted Data Domain as a disk target,” she said. “Data Domain probably has a much better dedupe ratio than Veeam. In some cases, it doesn’t make sense to turn on dedupe on the source side when you can just have the target-side dedupe.”
Rinnen said the replication improvement “allows Veeam to do more failover and failback, helping with DR.”
EMC and IBM recently launched storage products with the term “elastic” in their names. These announcements were significant for the companies and for the IT community in understanding a direction being taken for storage technology.
EMC launched Elastic Cloud Storage that incorporates ViPR 2.0 software onto three models of hardware platforms. The hardware consists of commodity x86 servers, Ethernet networking, and JBODs with high capacity disk drives. ViPR 2.0 brings support for block, object, and Hadoop Distributed File System (HDFS) protocol storage.
IBM’s Elastic Storage is an amalgam of IBM software solutions led by General Parallel File System (GPFS) and all the advanced technology features it provides. The announcement included server-side caching and a future delivery of the SoftLayer Swift open source software. In addition, IBM Research developed storlets, which allow software to run at the edge (on storage nodes in Swift) to accelerate data selection and reduce the amount of data transferred.
Elastic is not a new description or label for storage. Amazon Elastic Block Storage or EBS has been the primary storage solution used by applications that execute in Amazon’s EC2. Elastic is a new label from more traditional storage vendors, however. These solutions are being associated with cloud storage and extreme scaling – termed hyperscale by EMC and high-scale, high-performance by IBM (note that IBM already uses the term Hyper-Scale with the Hyper-Scale Manager for XIV that consolidates up to 144 XIV systems). Deployment for private/hybrid clouds is mentioned repeatedly in addition to cloud environments deployed by service providers as targets for elastic storage.
But in the world of IT, we like to fit products and solutions into categories. Doing so helps us understand and compare solutions. Categorization is also a big factor in having discussions where both parties can easily understand what is being discussed.
These elastic storage discussions are a bit more complex and require more of a description of how they are used than just a product discussion. The initial thought about EMC Elastic Cloud Storage is that it is ViPR delivered in a box. That is true but it is more than that. The box concept doesn’t really foster the immediate understanding of what the system will be used for in IT environments. For IBM, Elastic Storage could be seen as GPFS on a server—a solution that has already been offered as SONAS, Storwize V7000 Unified, and the IBM System X GPFS Storage Server. But again, there is more to IBM Elastic Storage than that.
So, we have a new name that may become a category. It is still too early to tell whether that will have real traction with customers or remain a marketing term. Ultimately, it’s about IT solving problems and applying solutions. Storing and retrieving information is the most critical part of any information processing endeavor and involves long-term economic considerations. The term elastic is a new designation for storage systems, and is currently equated to using commodity servers and JBODs with custom software. Attributes about performance, scaling, advanced features, and reliability go along with the systems and are highlighted as differentiating elements by vendors. Elastic may be a new category, but the name is not yet sufficient to understand how it solves the problems for storing and retrieving information.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).