Veeam will add snapshot integration with EMC VNX and VNXe storage arrays when it rolls out its Availability Suite 9.
Virtual backup specialist Veeam began integrating with storage arrays for more efficient backup and recovery of snapshots in 2013 with support for Hewlett-Packard 3PAR and StoreVirtual arrays. It added support for NetApp arrays in version 8 of its backup and recovery suite last year.
The integration allows Veeam Backup from Storage Snapshots to create snapshots faster by offloading the process to the supported arrays.
“One of our strategies is to integrate with primary vendors and their snapshot capabilities,” said Doug Hazelman, Veeam VP of product strategy. “This provides two capabilities. First, you get faster, more efficient snapshots because we can leverage the power of a primary storage array. The second is on the recovery side, we can see existing snapshots on the array and we can provide recovery operations from the array.”
Veeam Availability Suite’s Enterprise Plus Edition is required for Backup from Storage Snapshots.
Availability Suite 9 is months away from release, but, as usual, Veeam will pre-announce features before it ships.
“We’ll have rolling thunder through the next couple of months talking about new features,” Hazelman said.
The Palo Alto, California, company has raised a total of $51 million since it launched 15 months ago.
“Our goal is to invest heavily in sales and marketing,” said Bipul Sinha, Rubrik’s CEO and co-founder. “We (also) have a lot of projects planned to rapidly expand the platform.”
Rubrik’s r300 Series Hybrid Cloud appliance is a 2U device that contains up to four x86 nodes and includes the Rubrik Converged Data Management Platform (RCDM) to manage backup across on-premise data centers and public cloud storage. It also is configured with Rubrik’s Cloud Scale File System.
The appliances come in two models. The 330 contains nine hard disk drives and three solid state drives (SSDs) and supports up to 200 virtual machines. The 340 contains 12 hard disk drives and four SSDs and supports up to 300 virtual machines. Rubrik supports VMware, Microsoft Hyper-V and KVM hypervisors. It also supports Amazon Web Services (AWS).
The systems are designed for speedy data recovery and long-term data retention management in the public cloud. They include a Google-like search capability for predictive search results based on data stored in both a private and public cloud. Data is fully indexed for search. Rubrik also can be turned into a storage endpoint so data and storage can be provisioned to developers during research and development projects.
Rubrik has 40 employees, with two-thirds in engineering and the rest in sales and marketing. The company has an all-channel business model and claims to have 20 customers in its early access program. The latest funding round was led by Greylock Partners with participation from Lightspeed Venture Partners and existing angel investors.
Now that Nimble Storage is growing revenue from its Fibre Channel support, it’s time for Nimble executives to think about what they will add next.
An all-flash array would be one logical addition. Nimble has supported flash in a hybrid setup since the start, and last year added an all-flash disk shelf that plugs into a Nimble controller to give customers extra performance.
Nimble has maintained that its hybrid arrays perform at or close to the levels of all-flash but at a lower cost. But on Nimble’s earnings report call Tuesday, CEO Suresh Vasudevan admitted that customers sometimes pick small all-flash arrays when they have one application that needs a performance bump. Vasudevan said Nimble’s architecture can support all flash, but stopped short of committing to an all-flash array.
“The underlying [Nimble] Adaptive Flash platform allows us to not just deploy the mix of flash and disk in a storage system but over time it can also be deployed in an all-flash configuration,” Vasudevan said when asked if Nimble needs an all-flash platform. “That is something our platform certainly allows us to do. I won’t be much more specific than that on how we are thinking about timelines.”
NAS is another missing piece for Nimble, which supported iSCSI block storage from the start and added Fibre Channel last year.
Vasudevan said customers have requested file protocols on Nimble arrays, but that is not on the short-term roadmap. He said customers do store files on Nimble systems now, and others use Nimble as the back end storage for traditional file servers. “We see the ability to add protocol support for file apps over time as a growth opportunity,” he said, adding it was “not something that’s a near-term driver in a big way for us.”
FC support helped Nimble increase its revenue year-over-year last quarter while the revenue of the large storage vendors decreased. Nimble reported $73.1 million for the quarter, ahead of its previous forecast of $68 million to $70 million. Nimble’s revenue grew 53 percent over the same quarter last year and four percent over the fourth quarter of 2014.
Nimble said 14 percent of its bookings last quarter included FC, and 70 percent of FC customers were new to Nimble.
FC also helped bring Nimble into more large transactions, as deals of more than $250,000 quadrupled from last year.
Nimble cut its losses in the quarter to $7.9 million from $10 million last year, but is unlikely to break even until the fourth quarter of 2016.
NetApp is selling AltaVault as a physical product on FAS hardware, a virtual appliance for VMware ESX and Microsoft Hyper-V hypervisors and as an appliance in the Amazon Web Services (AWS) and Microsoft Azure public clouds. The systems integrate with NetApp SnapProtect and most common backup software applications, and can back up to a public cloud or private clouds built on object storage such as NetApp StorageGrid Webscale, EMC Atmos, Cleversafe, OpenStack Swift and Cloudian HyperStore.
The AVA400 physical appliance uses the NetApp FAS8000 controller head. It supports 12 to 72 hard disk drives for between 32 TB and 192 TB of usable local storage and 960 TB of cloud capacity. A 24-drive expansion shelf can be added. NetApp claims a 5.5 TB per hour maximum ingest rate. The AVA400 can run in cold storage mode with 32 TB of usable local storage, 10 PB of cloud capacity and a 350 GB per hour ingest rate.
NetApp plans to add an AVA800 physical appliance in the second half of 2015 that supports 96 drives, up to 384 TB of usable local storage, 1.92 PB of cloud capacity and an 8 TB per hour ingest rate.
The AVA-8, AVA-16, and AVA-32 virtual appliances on ESX and Hyper-V support from 8 TB to 32 TB of local disk capacity, 4.8 PB of cloud storage in backup mode and 10 PB of cloud capacity in cold storage mode. The virtual appliances have a 2.6 TB per hour maximum ingest rate.
The AVA-c4 for Azure and AVA-c4, AVA-c8 and AVA-16 for AWS can back up cloud-based workloads or serve as a second or third site for disaster recovery.
The appliances dedupe and encrypt data. Running on the 10U physical appliances brings a significant bump in performance from the smaller appliances that Riverbed used.
Phil Brotherton, NetApp’s VP of cloud solutions, said NetApp will continue to support the SteelStore appliances in the field but today’s launch signals the integration process is complete.
“With this release, everything about the product goes into the NetApp supply chain,” he said.
Brotherton said AltaVault is a big part of NetApp’s hybrid cloud storage strategy.
“It is important to integrate on-premise and off-premise data management into a format that customers can use across the board,” he said. “This is a critical component of our data fabric vision.”
Like many data protection vendors, Commvault added file sync and share capabilities within the last few years.
Today, Commvault expanded its Edge file sync platform with Edge Drive, which lets users set up a personal folder to share files in the cloud.
Edge Drive is part of Commvault’s Endpoint Data Protection Solution Set (EFSS) that was launched as a separate cloud offering about six months ago.
The Edge Drive allows users to sync data across mobile devices to access and collaborate on files and other documents. It also gives IT administrators the ability to control and secure corporate data, which had been a growing concern as employees started adopting consumer-based sync-and-share product on their own.
Steve Luong, Commvault’s senior manager of product marketing, said Edge Drive is different than most of the other sync-and-share products on the market because data is stored in the Commvault content repository. There, it is protected via EFSS.
“There are all kinds of sync-and-share vendors out there,” Luong said. “Ours is different because we bring the data into our content store where it can be searched for compliance reasons on an enterprise scale. The content store is a collective storage place and it has a front-end web interface.”
The Endpoint Data Protection solution was announced in January and is designed to protect data on mobile devices by backing up laptops, desktops, smart phones and tablets.
“IT is losing control of that data. This is why we extended our solution to include this capability,” Luong said.
Commvault changed its packaging strategy in late 2014 by breaking up its flagship Simpana data protection software for customers who have specific data protection needs but don’t require the entire platform. Commvault still sells the entire Simpana software platform intact for customers who require all of its capabilities, but the smaller bundles reduce complexity and cost.
Earlier this month, Commvault launched four customized software modules and add-ons based on its flagship product. The packages are tailored for cloud deployments and include Commvault Cloud Disaster Recovery, Commvault Cloud Gateway, Commvault Cloud Replication and Commvault Cloud Development and Test. The Cloud DR package also protects virtual machines in the cloud and enables restores to virtual machines.
I recently attended EMC World and the IBM Edge conferences in Las Vegas, and was struck by how these shows have changed the industry. Not too long ago, industry-wide shows such as Storage Networking World (SNW) were the primary events to meet with vendors for a broad view of each vendor’s product focus and strategy. Now, vendors hold their own conferences — often in Las Vegas — that are lavish productions.
The conferences are major investments for the vendors, although partners pay to exhibit in the Expo Hall and attendees may have to pay to defray some of the cost.
Attendees include customers ranging from executives who may be escorted by their vendor sales reps to technical engineers who make things work and are looking for product information. The vendor’s resellers and distributors also attend. Analysts are invited to listen to the general keynotes and then get briefed by executives on the company’s future strategy and current successes. Most vendors give analysts one-on-one sessions with the executives to get a deeper understanding of direction and motivation. I value these greatly because they help me explain vendor directions to our IT clients.
The media also attends, often with a heavy influx of foreign press. There is usually a separate session for the press and it focuses more on current technologies than the future plans that analysts get briefed on.
The vendor conference is a chance to hear what is new from a product standpoint. Vendors are increasingly adjusting product announcement schedules to coincide with their conferences. Some even time product launches to come at the same time as a competitor’s conference. The new information gets attention from analysts, press, and customers focused on that conference.
But the real value of the conference is not what products the vendors present or how they are helping to solve all the world’s problems. It really centers on the vendor’s strategic direction. Sometimes that direction gets muddled in big picture marketing talk, but there is value in sorting that out.
Post-conference, analysts write product reviews and the press reports on what is new or different. Customers may add new vendor products to their evaluations or learn new ways to optimize their current systems. Attendees may also need to reduce caloric intake after eating and drinking too much at the conference. In the case of Las Vegas, the lengthy walking requirements do not compensate for the other indulgences.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Considering the poor numbers coming from storage array vendors’ recent earnings, it’s no surprise Brocade Thursday said its storage switch revenue declined from last year.
EMC, NetApp, IBM and Hewlett-Packard all reported declines in storage revenue last quarter. Brocade sells SAN and Ethernet networking switches, with its storage switch revenue usually reflecting industry demand. Last quarter, Brocade’s storage switch product revenue of $316 million was down two percent from last year and its overall revenue of $546.5 million fell about $5 million below expectations.
Brocade blamed the shortfall on disappointing sales of low-end switches by OEM partner Lenovo, due to a rocky transition with products Lenovo acquired from IBM last year. Brocade’s Fibre Channel (FC) director switch sales actually increased nine percent to $139.5 million, while smaller FC switches fell 5.5 percent to $145.4 million and embedded switches in servers dropped from $37.6 million last year to $28.7 million. The embedded switches are the products used by the Lenovo servers.
Brocade forecast another drop in SAN switch revenue this quarter, in the range of two percent to six percent.
Brocade has been pushing IP storage switches for workloads switching from FC to Ethernet SANs, but CEO Lloyd Carney said he expects FC to get a bump from flash arrays. He also sees high performance applications remaining on FC.
“There are certain workloads that are best suited for Fibre Channel and then they’re designed around Fibre Channel,” he said. “There are certain workloads that don’t like the Ethernet-based latency, so they’re going to be biased towards Fibre Channel.”
Carney also pointed out that part of the decline in storage array revenue is due to falling disk prices, while demand for capacity remains high.
“What you pay for a terabyte of storage today is a fraction of what you paid just two years ago,” he said. “But the actual terabytes of storage going out the door is growing at a really good clip still.”
HP Thursday joined the list of array vendors to report declines from last year. HP’s storage revenue of $740 million last quarter fell eight percent from a year ago, compared to two percent declines by IBM and NetApp and a five percent drop by EMC.
On the plus side, HP said its 3PAR, StoreOnce and StoreAll brands improved five percent to $356 million and made up 48 percent of its storage revenue. On the down side, HP’s “traditional” storage (EVA, MSA and tape) fell 18 percent to $384 million.
NetApp’s earnings report today disappointed investors as the vendor missed its previous revenue forecast and issued lower than expected guidance for this quarter. NetApp today also revealed plans to cut around 500 jobs.
NetApp’s revenue of $1.54 billion for last quarter came in below its previous guidance range. Both its quarterly revenues and $6.12 billion revenue for its fiscal 2015 year that ended last quarter were down year-over-year.
For this quarter, NetApp forecasted revenue of $1.27 billion to $1.375 billion compared to consensus Wall Street expectation of $1.46 billion.
“We are not pleased with our results,” is how NetApp CEO Tom Georgens opened the earnings call with analysts.
The problem, according to Georgens, is that customers were waiting for Clustered Data OnTap to reach feature parity with the legacy Data OnTap operating system. That happened when Clustered Data OnTap 8.3 became available in late 2014. But upgrading is a complex process, so large customers have held off, and channel partners who have not invested in sufficient Clustered OnTap training have held back smaller customers.
Georgens said waiting for Clustered Data OnTap to catch up with Data OnTap has hurt the vendor’s ability to attract new customers. It has also hurt sales of NetApp FAS arrays to existing customers who are holding on to old systems until they are ready to upgrade to Clustered OnTap.
“It is clear that we underestimated the impact that the transition to Clustered OnTap has had on our pipeline,” Georgens said. “Clustered OnTap is a re-architected and modernized version of OnTap. Customers have to upgrade existing storage management policies and migrate their data. Many large installed customers have resisted upgrading until feature parity was achieved with Data OnTap. The inhibitors to upgrade have now been mitigated with 8.3, and our largest customers see a path to upgrade to Clustered OnTap.”
The problem won’t go away soon, and NetApp’s poor guidance reflects that. Georgens said NetApp will beef up its sales force and its work with the channel to prepare to upgrade customers, but he said the next two quarters will be rough during the transition period. Georgens and CFO Nick Noviello said they expect NetApp to return to normal growth in the second half of its fiscal 2016 year.
NetApp today notified employees that it will lay off approximately 500 employees, and confirmed those plans in an SEC filing. The NetApp execs didn’t mention it on the call, but perhaps that is what Noviello was referring to when he talked about “a disruption related to re-tooling aspects of our business in the first half of [fiscal year] 2016.”
He didn’t explain how the layoffs might affect plans to increase the NetApp sales force.
Drobo is under new ownership again.
Founder Geoff Barrall is spinning low-end NAS vendor Drobo out of Connected Data, marking the third major management shift in Drobo’s 10-year history. A group headed by former BlueFin Technologies CEO Mihir Shah will buy Drobo, three years after Barrall bought back the company he started in 2005 and left in 2009.
After leaving Drobo following a dispute with investors about the startup’s strategy, Barrall and other Drobo executives started file sharing vendor Connected Data in late 2011. Barrall then acquired Drobo in 2013, merging it with Connected Data.
Barrall remains CEO of Connected Data.
Shah had been CEO of IT service provider and consultant BlueFin since last August. He previously served as managing director of corporate development and strategy at Fibre Channel switch vendor Brocade from 2010 to 2014. His background is in finance and mergers and acquisitions. He also worked in corporate development at IBM, sat on the board of InMage Systems and worked for several venture capital firms.
Barrall said he thought Connected Data and Drobo could fit under one roof when he re-acquired Drobo, but he discovered they are best run separately. He will remain on Drobo’s board.
“It became apparent they really are two different companies,” he said. “They have two different manifest destinies, two different brands and two different markets. So we’re splitting into two companies.”
Barrall said he concentrated on cutting costs at Drobo while adding products such as the B1200i hybrid storage array. He said Drobo has become profitable, while Connected Data still operates at a loss but is off to a good start with its Transporter enterprise replication product.
“Expenses were very high when we acquired Drobo,” he said. “We got the operating costs of the business in line.”
He said Connected Data has about 40 employees and Drobo has 24, with few working across both. New Drobo CEO Shah said he expects to add to its head count, especially in marketing.
Shah said he is impressed that Drobo’s storage can be used by non-technical people, and plans to add cloud integration and expand into use cases such as military operations, video surveillance and restaurant franchises.
“Our goal is to grow it into the number one or number two market position,” Shah said. “We think we can double the business in the next two-and-a-half years.”
Shah is the second former Brocade executive to become Drobo CEO. Tom Buiocchi, who replaced Barrall at Drobo in 2009, had been Brocade’s VP of marketing.
Cloud provider Axcient launched an appliance-less, direct-to-cloud recovery service aimed at small- to medium-sized businesses and remote office data protection.
The company has been selling a physical appliance for several years, and last year it introduced a virtual appliance to replicate data and applications to the cloud. This appliance-less recovery service is focused on environments where an “appliance on-premise is overkill,” said Todd Scallan, Axcient’s vice president of products and engineering.
Scallan said customers access the service by downloading an agent from the Axcient website. The service offers image-level replication to the cloud, recovery and server failover, file-level recovery, full-image restore and bare metal recovery. If data needs to be recovered, the point-in-time snapshot is selected and downloaded. Axcient can ship a thumb drive or 2 TB or 3 TB drives if a new image is required.
“We’ve always had an appliance for on-site, whether it is virtual or physical,” said Scallan. “This is a new offering where neither is required. Data is encrypted and you get incremental change-block recovery capability the same as the appliance-based cloud. There is a cloud-based user interface and you see all the point-in-time snapshot replicas to the other site.”
Scallan said organizations that have business critical servers want appliances for rapid recovery rather than recovery over the Internet. Those organizations that require a recovery service without an on-premise appliance “are only limited by the bandwidth they have,” Scallan added.