Veeam Software is months away from launching Backup & Replication 8 for virtual machine backup, but the vendor today revealed the upgrade will support NetApp storage arrays and data protection applications.
The integration means Veeam’s Backup & Replication Enterprise Plus customers can back up from storage snapshots on NetApp arrays, and all Backup & Replication customers can recover virtual machines, individual files and application items from NetApp production storage through Veeam’s Explorer for Storage Snapshots.
Doug Hazelman, Veeam’s VP of product strategy, said the VM backup specialist is integrated with NetApp’s primary storage as well as its Snapshot, SnapVault and SnapMirror applications.
“With backup from storage snapshots, we can initiate a snap on a primary storage array, back up on an array and send the snapshot into SnapVault,” Hazelman said. “We get application-consistent VMs. Now we’re application consistent on backups as well as SnapVault.”
Veeam is far from the first backup software vendor to support snapshots on NetApp arrays. CommVault, Symantec, Asigra and Catalogic are among those who support NetApp snapshots. Even EMC – NetApp’s chief storage rival – is adding support for snapshots on NetApp NAS in its new version of NetWorker software.
Veeam first supported array-based snapshots for Hewlett-Packard’s StoreVirtual and 3PAR StoreServ arrays in Backup & Replication 6.5 in 2012, with a promise to support more storage vendors. NetApp is the second storage vendor Veeam supports.
Hazelman said Veeam picks its array partners according to customer demand as well as “how easy it will be to work with that vendor.” He would not say which array vendor Veeam will support next.
Veeam’s Backup & Replication 8 is expected to be generally available in the second half of this year.
LSI Corp. introduced the latest model in its Nytro product family, the Nytro MegaRAID 8140-8e8i, a card that accelerates application performance and provides RAID data protection for direct-attached storage (DAS) environments.
The LSI Nytro MegaRAID cards are part of the Nytro product portfolio of PCIe flash accelerator cards. The newest card doubles the capacity to 1.6 TB of usable onboard flash compared to the previous Nytro MegaRAID cards.
The Nytro MegaRAID 8140-8e8i card integrates an expander into its architecture to give scale-out server environments connectivity for up to 236 SAS and SATA devices through eight external and eight internal ports. The 16 SAS ports support both hard disk drives and JBOD connectivity.
“We are seeing a lot of demand in scale-out DAS,” said Jason Pederson, senior product manager for LSI Nytro solutions. “The demand we see so far is from a lot of Web hosting companies. The card will be available in the second quarter. We are in the final stages of testing.”
The earlier MegaRAID 8110 and 8120 cards support up to 128 devices and up to 800 GB of onboard flash.
The MegaRAID design is geared towards scale-out servers and high capacity storage environments. LSI first launched its Nytro Architecture product family in April 2012, combining PCIe flash technology and intelligent caching. LSI claims it has shipped more than 100,000 Nytro cards worldwide since introducing the products.
The card’s 1.6 TB of onboard flash for intelligent data caching allows servers, particularly in hyperscale environments such as cloud computing, Web hosting and big data analytics, to maximize application performance where data traffic is heavy.
The company also introduced Nytro flexible flash, which lets the onboard flash be split between data stores and cache: for example, 10 percent for data stores and 90 percent for cache, all of it used as storage, or all of it used as cache.
An early open-source storage player is back, seven years after going out of business mainly because it was ahead of its time.
Open Source Storage (OSS) re-launched in January with an undisclosed amount of funding from private investors and has since released two product lines.
OSS first launched in 2001 and was gone by 2007 despite landing a few big customers including Facebook.
“We were the guys who pioneered this,” said OSS CEO Eren Niazi, who was 23 when he founded the company. “We started the open source storage movement.”
He said the movement stalled because the storage ecosystem did not warm to OSS. “A lot of tier one vendors didn’t want to work with us and investors didn’t want to back us,” he said. “The business came to a halt.
“Seven years later, people say, ‘Open source storage, I get it, it’s exactly what I need.’”
Of course, OSS faces a lot more competition in 2014 than it did in 2007. Today there are open source storage options such as OpenStack, Hadoop, Ceph, and products built on ZFS. Still, adoption remains low as vendors such as Red Hat, SwiftStack, Cloudera, Nexenta, Inktank and now OSS are trying to break out.
OSS’s Open Cluster software can run on solid-state or SAS drives and supports block or file storage. Niazi said OSS has more than 30 customers with others evaluating the software. He said his products are used mostly by enterprises “with large data demands and large deployments and are trying to reduce their costs.”
OSS products are based on its N1.618 Plug and Play Middleware and open-source software. Last month it brought out Open Cluster ZX, which scales to 1,024 nodes and is built for virtual servers based on OpenStack, as well as NAS, object storage and virtual machine-aware storage. This week OSS added its Open Cluster Cloud Series, designed for virtual servers, cloud-based services, high-performance computing and big data analytics. The cloud series comes in two-node and four-node bundles.
Looking at advanced features is always a critical step when reviewing storage systems because of the value these features bring. There are a large number of storage system-based features, and variations exist between different vendor implementations. But taking a step back, it is interesting to examine where these features really belong. They were developed in storage systems to fill a need and each feature could be applied at a single point, regardless of the different hosts accessing the information.
To start a discussion about where the features really belong, let’s examine the more commonly used ones. This is not a call for change, because change is unlikely. It is a discussion that may help in understanding how these features can help.
Encryption. Encryption should be done at the time an application creates or modifies data, and before the data is transmitted out of the application’s control. That way the data is already encrypted when it travels over any network to a storage location. Access to the data from another server would require authentication and the encryption keys for decrypting it. The storage system is not the best place for encryption, because applications can then access the data without any encryption controls; encryption at the storage system protects only against physical theft of devices.
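The pattern argued for above, encrypting inside the application before the bytes leave its control, can be sketched in a few lines. This is a toy illustration with hypothetical names, not any vendor's implementation: the hash-derived XOR keystream stands in for a real cipher, and production code should instead use a vetted authenticated cipher (such as AES-GCM) from an audited library.

```python
import hashlib
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key + nonce + counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    # The application encrypts before the data ever reaches a network or array.
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))


def decrypt_record(key: bytes, blob: bytes) -> bytes:
    # Reading the data back from any server requires the same key.
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

The storage system only ever sees ciphertext, which is the point of the argument: whoever holds the key controls access, regardless of which server mounts the volume.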
Data protection through backup. Decisions about backup should be made by the application owner or business unit. In most environments, IT makes a broad-based decision about data protection as a standard policy and applies it to data, usually on a volume basis. The actual value of the data and the corresponding protection requirements may not be known to IT, or kept current. Data protection should be controlled and initiated at the application level. The same holds true for restoration.
Data protection for disaster recovery and business continuity. Recovering from a disaster, or continuing operations through a disaster or interruption, is a complex process that requires coordination between applications and storage systems. Storage systems replicate data to other storage systems, and the failover or recovery process is orchestrated according to a set of rules. If applications could make the copies themselves (simultaneous writes to more than one location) without impact to operations (performance, etc.), the storage system would still store the remote copy, but recovery and coordination would be handled by the application at a single point, with knowledge of the information requirements.
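A minimal sketch of the "simultaneous writes to more than one location" idea, with the application rather than the array producing both copies. Directory paths stand in for a local and a remote storage target, and all names are illustrative.

```python
import os
from pathlib import Path


def dual_write(record_id: str, payload: bytes, primary: Path, replica: Path) -> None:
    """Write the same payload to two locations from the application.

    Recovery coordination then lives at a single point, in the application,
    instead of in array-to-array replication rules.
    """
    for root in (primary, replica):
        root.mkdir(parents=True, exist_ok=True)
        tmp = root / (record_id + ".tmp")
        tmp.write_bytes(payload)
        # Atomic rename so a reader never sees a half-written record.
        os.replace(tmp, root / record_id)
```

A real implementation would have to address exactly what the author flags: keeping the second write off the application's latency path (e.g., asynchronous queues) and handling a replica that is temporarily unreachable.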
Compliance management. This is a set of features that help meet different regulatory requirements. Each of those represents a challenge. Only a few storage systems do all of these:
- Retention controls – these protect data from being deleted until a certain time, event, and/or approval. These can be done in software if the software is the only method to access data. Because this may defeat the flexibility of how data is stored for archiving, the storage system may be the best location for this function with software controlling the operation.
- Immutability – protection from data alteration has been implemented in storage for some time. With many sources of access, immutability at the storage system would still seem to be the best location.
- Audit trail for access to data – requirements to track access to data can be done in the application if there is no way data could be accessed otherwise. Typically there may be other means, so having the storage system handle the audit trail is probably the best solution.
- Validation of integrity – this requirement is a typical feature of storage systems anyway so continuing this in the storage system would be expected.
- Secure deletion – this is implemented as a digital overwrite of the data on the storage device. Because of device characteristic knowledge, this should continue to be part of the storage system.
- Legal hold – this has the same issues as retention controls.
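The secure-deletion item above, a digital overwrite before removal, can be sketched in a few lines. This is illustrative only: on flash media, wear-leveling remaps writes and can leave stale copies behind, which is exactly why the author argues the storage system, with its device-characteristic knowledge, should own this function.

```python
import os
import secrets


def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's bytes with random data before unlinking it.

    Caveat: this is only meaningful when writes land in place (plain
    spinning disks); SSDs and copy-on-write file systems may redirect
    the writes, so true sanitization belongs below the file system.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device
    os.remove(path)
```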
It’s unlikely that we’ll see any substantial change in the way things work regarding storage features in existing IT environments. New implementations for private clouds may do things differently, which means applications would have different requirements. It will be interesting to see what develops as private clouds are deployed within traditional IT environments.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Backupify today disclosed that its 2014 roadmap will be its most ambitious by far, with plans to add support for 12 cloud applications this year, including Box and Dropbox. Other cloud applications on Backupify’s 2014 roadmap include NetSuite, GitHub, Zendesk, Concur, ServiceNow, JIRA, Asana, Egnyte, Office 365, and Basecamp.
Backupify CEO Rob May said backup for Box will probably be Backupify’s next product, coming within the next month or so. The roadmap calls for support for NetSuite, Egnyte and JIRA in the first half of 2014 with the rest on the list to follow later in the year.
May said Backupify can add backup products faster now because of the new developer platform it launched last October with a set of open APIs.
“You’ll see an increased frequency of launches as the year goes on,” he said. “We’ll add maybe an application a month for three or four months, then an application every week and maybe an application a week by the end of the year.”
Amplidata, an early object storage vendor, picked up $11 million in funding this week. That’s a small haul compared to recent funding rounds for storage companies, but CEO Mike Wall said it will fuel a two-pronged strategy for Amplidata.
On the technology front, Amplidata is moving towards a software-only model. Today it sells its AmpliStor software with erasure coding packaged on commodity hardware, either its own bundles or through partners.
That brings us to the second part of Amplidata’s strategy: expanding its partnerships. Wall said he is working on more deals such as the OEM relationship it has with Quantum, which uses AmpliStor in its Lattus archiving platform. Quantum was among the investors in Amplidata’s new funding round.
“I want to be software-only,” Wall said. “Early on we shipped hardware, and we still do. But Intel has put out a reference design for hardware vendors, and we’re in a position where we have a world-class erasure code stack that can run on pick-your-flavor hardware. Our business model will be, you can buy your hardware at the best price with the best performance and best quality you’re comfortable with, and we’ll provide you with the software.”
Wall said Amplidata will maintain its channel but he expects the bulk of its revenue to come through partners. He said he is in various stages of discussions with large potential partners, including a telco/ISP company that is in beta now with a cloud storage offering using AmpliStor.
Regardless of how AmpliStor is sold, Wall said he expects object storage momentum to accelerate over the coming months. “Five years ago, this was a nice-to-have technology but not a must-have,” he said. “Over the next 12 to 24 months, it will be a must-have. When you look at the amount of data generated and the size of disk drives and data sets, traditional RAID methods are not sufficient. You need erasure coding. When you do all that in software and on commercial off-the-shelf hardware, it’s too compelling to ignore.”
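To make the erasure-coding point concrete, here is the simplest possible case, a single XOR parity shard (essentially RAID 5), which lets any one lost shard be rebuilt from the survivors. This sketch only illustrates the principle; production systems such as AmpliStor use stronger codes (e.g., Reed-Solomon variants) that tolerate multiple simultaneous shard failures, which is what makes them viable at large drive sizes where RAID rebuild times break down.

```python
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def make_parity(shards: list) -> bytes:
    """Compute one parity shard over equal-length data shards."""
    return reduce(xor_bytes, shards)


def rebuild_missing(shards: list, parity: bytes) -> bytes:
    """Rebuild the single shard marked None by XOR-ing parity with survivors."""
    acc = parity
    for shard in shards:
        if shard is not None:
            acc = xor_bytes(acc, shard)
    return acc
```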
Intel Capital led Amplidata’s latest round, which brings Amplidata’s total funding to $33 million. Hummingbird Ventures, Endeavor Vision, Swisscom Ventures and Quantum all participated. All were previous investors.
Today, Google took its turn dropping its cloud storage prices.
Google today announced it was cutting prices by as much as 68 percent for its cloud storage services, while also eliminating its tiered pricing and introducing a flat rate for its Google Cloud Storage standard and Durable Reduced Availability (DRA) storage.
The price for Google standard storage is now 2.6 cents per gigabyte per month, and DRA storage is down to 2 cents per gigabyte per month. Previously, Google’s cost structure was more complicated: customers paid a higher price on the first terabyte stored, and the price per terabyte dropped as the capacity stored in the cloud grew.
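Under the flat rates, the monthly bill becomes a single multiplication rather than a tier lookup, as in this sketch; the 50 TB capacity is just an example figure.

```python
def monthly_cost_dollars(capacity_gb: float, cents_per_gb: float) -> float:
    """Flat-rate monthly cost: capacity times the per-gigabyte price."""
    return capacity_gb * cents_per_gb / 100.0


capacity = 50 * 1024  # 50 TB expressed in GB
standard = monthly_cost_dollars(capacity, 2.6)  # Google standard storage
dra = monthly_cost_dollars(capacity, 2.0)       # Durable Reduced Availability
```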
“This is the most dramatic price drop we have seen and it’s the most dramatic change of the model as it goes from tiering to a flat rate,” said Nicos Vekiarides, CEO of TwinStrata, a cloud storage gateway vendor whose products move data to the Google cloud. “We were informed as a partner. Did we expect it to be this dramatic? I think a lot of folks are surprised right now.”
Competitors like Amazon and Microsoft Azure will likely respond in kind, given the pattern of these price drops in the past. Cloud providers have been engaged in a price war in what analysts characterize as a land grab or “race to the bottom.”
“What is interesting is how they will respond,” Vekiarides said. “Traditionally, it’s been an even playing field and they all kept it that way.”
Google’s price cut makes its storage services among the cheapest on the market. The only options that cost less are the cold storage services Amazon Glacier, priced at a penny per gigabyte per month, and EVault LTS2 Local-Redundancy at 1.5 cents per gigabyte per month. However, both Glacier and EVault have extra costs baked into their offerings.
“Both have greater charges for taking data out, particularly if they do it sooner than 90 days,” said Lorita Ba, TwinStrata’s director of marketing.
The low pricing is designed to lure more customers into the cloud, but price is not the only variable companies weigh when considering moving data to the cloud. They also need to look at performance, how the cloud service integrates with their environments, how it helps them manage capacity and growth, and how it works in a disaster recovery situation.
“There are a lot of elements to cloud storage solutions,” Vekiarides said. “But this enhances the value of replacing on-premise storage with the cloud. It makes the economic case that much easier.”
Copy data management vendor Actifio closed a $100 million funding round today. The round is likely the last funding it will need, and brings its total funding to $207 million and its valuation to more than $1 billion.
Andrew Gilman, Actifio’s director of global marketing, said the vendor will follow with a product launch soon moving its software down into the mid-market. Actifio has focused on large enterprises and cloud providers, with an average selling price of $349,000.
The Actifio software creates virtual copies of data so it can be placed in any location and used for multiple purposes. The first use case to gain traction was backup but it has other use cases for companies looking to reduce the copies of data they store.
Gilman said most (51%) of Actifio’s customers use its product for data protection, with 22% using it for resiliency (business continuity/disaster recovery), another 22% for test/development and the rest for analytics. He said 22% of customers displace EMC, 15% displace Symantec, 13% displace CommVault and the rest displace other vendors’ products.
Actifio claims the funding comes after a big 2013, in which its bookings grew 182% over 2012. The vendor claims more than 300 enterprise users worldwide and more than 25 cloud providers, including IBM’s SmartCloud Data Virtualization and SunGard’s Recover2Cloud DR services. Actifio has customers in 31 countries.
Gilman said Actifio has about 260 employees and intends to grow significantly with the funding. He said an initial public offering (IPO) is planned but Actifio is in no hurry because “we’re very methodical in everything we do.”
“We want to continue to build out our platform and take the product into new areas,” he said. “We will use this to go down into the mid-market and invest in end-user success.”
Along with new product releases, Actifio will invest in what it is calling a Customer Success Engineering group led by David Chang, current VP of products and a founder of the company along with CEO Ash Ashutosh. “It’s important that each of our users is delighted,” Gilman said. “We take this seriously as data custodians.”
New investor Tiger Global Management led the round with previous investors North Bridge, Greylock IL, Advanced Technology Ventures, Andreessen Horowitz, and Technology Crossover Ventures participating.
Symantec’s stunning firing of CEO Steve Bennett leaves the company with its third CEO in less than two years.
The move came as a big surprise because Bennett spent about half of his 18 months as Symantec CEO plotting a turnaround plan, and implementation of that plan is far from complete. Following the departures late last year of president of products Francis deSouza and CFO James Beer, Symantec will have a vastly different leadership team after a replacement is found for Bennett. Board member Michael Brown is the interim CEO.
The statement Symantec released Thursday about the firing quoted chairman Daniel Schulman saying the decision “was the result of an ongoing deliberative process, and not precipitated by any event or impropriety.” Financial analysts point to lack of growth and loss of market share on the antivirus/security side as Symantec’s biggest problems, but there have been issues on the backup and storage side as well. The biggest problem was the Backup Exec 2012 fiasco, which still is not fixed more than two years after its initial release.
Perhaps the main overall problem with Symantec is it is two companies under one umbrella. Symantec has never made storage and data protection an equal partner with security after acquiring storage software vendor Veritas in 2005. None of Symantec’s CEOs since then – John Thompson, Enrique Salem or Bennett – were storage guys, and there have been intermittent rumors that the storage products would be spun off. Veritas was a major storage market influencer as a standalone company, but Symantec is not seen as much of a force in the storage world.
Security and storage have mostly been separate divisions, and the two-pronged approach hasn’t worked. Symantec’s flagship backup application NetBackup is still doing well, and the decision to sell it as part of integrated appliances has worked out. But Backup Exec is a mess, Symantec’s storage management message is muddy, and the company is also losing its iron grip on the antivirus market.
It would help if the new CEO has experience in storage or data protection. Interim CEO Brown has that experience as a former CEO of Quantum and a Veritas board member before the merger. However, indications are that Brown will only hold the job until a permanent CEO is found. Let’s hope the search committee keeps storage in mind when screening candidates.
Tintri, which designs its VMstore storage appliances to be virtual machine-friendly, is releasing a plug-in to let customers manage VMstore inside of VMware’s vSphere.
The plug-in lets Tintri customers manage their VMstore appliances from the vSphere vCenter management tool. It makes VMstore dashboards visible from the vCenter server, and they can get alerts and monitoring information there. They can also set snapshots, clones and replication policies in vCenter.
“The end users care about the ESX application or the virtual desktop or the SQL Server application, not so much the storage system,” said Saradhi Sreegiriraju, Tintri senior director of product management. “We’ve exposed all the information from our VMstore dashboard into vCenter. Anything you can do from the VMstore UI – snapshots, clones, replication or monitoring – you can now do from the vCenter UI.”
The Tintri vSphere Web Client Plugin will be available next week as a download from Tintri.
Tintri’s selling point is that it lets customers provision storage at the VM level instead of dealing with the LUNs and volumes associated with traditional storage arrays. Its deeper integration with VMware comes as VMware moves further into storage with its Virtual SAN (VSAN) software, which turns the hard drives, solid-state drives and compute in VMware-connected servers into networked storage. VSAN is seen mainly as a competitor to hyper-converged storage systems such as those from Nutanix, SimpliVity, Scale Computing and Maxta, but it can also hurt VM-aware storage vendors. After all, VSAN enables companies to do many of the same things Tintri does.
Sreegiriraju said Tintri doesn’t consider VSAN a competitor because VMstore has been on the market for three years and its hardware is tuned to work with VM-aware software. He said VSAN will compete more with traditional storage systems. “VSAN is validating the architectural underpinnings that we have,” he said. “We agree with VMware that you need a system that understands VMs at the VM level rather than at the LUN level.”