For IT clients, the terms have different meanings depending on the responsibilities of the person talking. Preconceptions (or misconceptions) color what motivates customers in managing information. The application owner or business owner sees backup as something for which IT is responsible. At the same time, IT sees archiving as a possible impediment to success because it could make needed information more difficult to access.
Vendors' approaches to backup and archiving are driven by their products. For most vendors, backup and archiving are usually combined, and their messaging may cover both at the same time. This may not serve the vendor well because of the different customer perceptions and the different people served.
There are a few basics about the terminology that need to be understood along with some recommendations:
Backup is really about data protection. Data protection should be the top level message and is a continuum that includes replication and point-in-time copy (snapshot). Today, backup is an IT function where the backup group in IT serves the overall business – both applications and systems.
Archiving is really about information management. For the IT backup guys, it is just another form of backup and usually is thought of as backups that are being kept (retained backups) rather than part of a rotation. For the application owner, an archive is about moving some data and making it difficult (or delayed) to access. A storage administrator sees archiving as a migration between tiers and a way to reduce the primary capacity demand as part of capacity management.
The archiving discussion must be separate from data protection, although there is a data protection component in archiving. IT rarely takes initiatives to implement archiving practices (other than retained backups) for several reasons:
• Usually, IT is not empowered by business owners to make decisions about application data. The idea that data can be made less accessible or deleted is not something IT people believe they have the authority to act on.
• IT does not want to be wrong and cause an impact when it comes to making a decision about the data. The negatives outweigh the improvements that may be made by implementing an archiving strategy.
• The assignment for archiving in IT usually lands in the purview of the backup manager/administrator. Managing the backups is challenging and archiving is seen as moving individual elements such as files that are too fine-grained for the backup process.
The archiving practice needs to focus on the application and business owners who ultimately are responsible – both for the application use and the economic costs. The approach should be about moving data to a content repository that is appropriate for the diminished probability of access. The content repository is less costly, but must still be directly accessible by the application, with the information (typically files) visible to the application owner. The content repository does not store files inside a backup format but as individual elements (files) that are, in the application owner’s terminology, online.
From the application owner’s perspective, IT is not involved in the access. There should still be a discussion about “deep archive” repository for data that is not expected to be needed again but cannot be deleted. Again, this is an application owner decision but the mechanics are implemented by IT.
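The tiering decisions described above can be sketched as a simple age-based policy. This is a minimal illustration only: the thresholds, the tier names, and the use of last-access time as a proxy for access probability are all assumptions made here, and in practice the cutoffs are the application owner's decision, not IT's.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- real values are a business-owner decision.
REPO_AFTER = timedelta(days=180)      # move to the content repository
DEEP_AFTER = timedelta(days=365 * 3)  # move to the deep archive

def tier_for(last_access: datetime, now: datetime) -> str:
    """Classify a file by its (assumed) diminished probability of access."""
    age = now - last_access
    if age >= DEEP_AFTER:
        return "deep-archive"
    if age >= REPO_AFTER:
        return "content-repository"
    return "primary"

now = datetime(2013, 12, 1)
print(tier_for(datetime(2013, 11, 1), now))  # primary
print(tier_for(datetime(2013, 1, 1), now))   # content-repository
print(tier_for(datetime(2009, 1, 1), now))   # deep-archive
```

Files landing in the content repository remain individually visible and online to the application; only the deep-archive tier trades accessibility for cost.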
When it comes to backup and archiving, terminology matters. There is context for usage and dependencies on who is involved in the discussion. Archiving must be considered in the context of the application. To counter the preconceptions, the discussion should be about application content repositories rather than an archive. The concept of a deep archive is still highly valuable. The archive discussion needs to be with the application owner. Backup needs to be put in the broader context of data protection and separated from archiving. This makes the discussion more relevant to those involved. It also makes it an easier discussion to have.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Gridstore, which this year changed its strategy and product to focus on optimizing storage for Microsoft Hyper-V, today closed an $11 million funding round to bring its new system to market.
Gridstore CEO George Symons said the vendor will expand its sales team to sell the new product, also called Gridstore. Symons said Gridstore has converted all of its beta sites to paying customers, and is looking to add about 10 people to the 32-person company.
With Gridstore 3.0, the vendor switched from selling scale-out NAS grid storage for SMBs to storage for mid-market companies using Windows Server 2012 and Hyper-V. Gridstore installs as a virtual controller that runs on the host and provides quality of service on a per-VM basis. It has hybrid configurations that use PCIe flash and arrays that are 100% spinning disk.
Symons said sales of the new systems are averaging 36 TB compared to 6 TB in previous versions. He said Gridstore competes mainly with NetApp FAS, EMC VNX and Nimble Storage CS arrays. He said about half of its customers use the systems for backup and the other half for primary storage.
Symons said he expects to get a sales bump as Hyper-V catches on.
Part of the funding will also go towards product development. Symons expects an upgrade around June. He said Gridstore will eventually move into VMware and release all-flash versions of its array, but not in the next release.
“That will focus more around management and performance enhancements,” he said. “We’ll add grid-based snapshots for VSS [Volume Shadow Copy Service] and zero-copy clones.”
The zero-copy clones should make Gridstore a better fit for virtual desktop infrastructure (VDI) storage, he said.
“Architecturally, we’re well set up for VDI because of the way we spread data across multiple nodes,” he said. “We can take advantage of write-back caching.”
Because Gridstore is software that runs on commodity hardware (Dell servers), the vendor labels the product software-defined storage. Symons said he was reluctant to use the term, but was surprised to find it resonates with customers.
“I hesitated to use the term,” he said. “I thought, ‘does anybody care outside of industry people?’ But customers like the fact that there’s something new and it’s why we’re different. The fear was that term can mean so many things, but reception has been tremendous.”
The funding brings Gridstore’s total to $23.5 million over two rounds. Acero Capital led the round with previous investors GGV Capital, Investec Ventures Ireland Limited, and Onset Ventures participating.
Symantec Corp. has put its channel partners and managed service providers (MSPs) on notice that it is shutting its backup cloud service, known as Symantec Backup Exec.cloud, effective on Jan. 6, 2014.
The company has not made an official announcement public, but it sent an email to its reseller partners on Monday, Nov. 25, 2013 informing them there will be no new sales or renewals for the service as of Jan. 6, 2014. Also, customers and partners will not have access to the service, data or technical support as of Jan. 6, 2015.
Jerry Gowen, Symantec’s group manager for worldwide communications, confirmed the news via email. The email sent to partners stated, “As you know, one of our primary goals is to delight you and your customers with our product offerings. Taking this into consideration and carefully evaluating the overall needs of our customer base, we have made the difficult decision to begin the process of discontinuing Symantec Backup Exec.cloud.”
In the message, the company stated its other offerings – including Symantec Endpoint Protection cloud, Symantec Endpoint Protection Small Business Edition 2013, and the on-premises Backup Exec and NetBackup software – would not be affected.
“I would assume the solution is not a big money maker for Symantec,” said Pushan Rinnen, research director for storage technologies and strategies at Gartner. “This is not a focus for them and that is why they want to move away from it. It didn’t come as a surprise to me because I didn’t get the sense that it was a gigantic success.”
That’s news to Eran Farajun, executive vice president at Asigra, a backup vendor that has strategic alliances with IBM, NetApp and Cisco for its Asigra Cloud Backup product. Farajun said Symantec representatives told him the company was managing “petabytes” of data via Symantec Backup Exec.cloud.
“I think this is interesting and surprising,” said Farajun. “There are a lot of partners that are reselling this. This is Symantec. This is a tier-one company. I thought everything was hunky dory. Then they put this email out, and everybody is saying ‘Huh?’ They can’t decide to act like a risky startup and just get out. There still are a lot of answers that need to come out.”
Symantec’s Backup Exec.cloud service is targeted at small-to-medium size businesses. Symantec partners with Savvis and Rackspace to store data in two East Coast data centers and several in Europe, according to Rachel Dines, senior analyst for infrastructure and operations professionals at Forrester Research.
Dines said Backup Exec.cloud was released in 2011 and the product lacked the features and functions that other cloud backup services offer.
“It seemed like it never matured,” she said. “It had limitations. Scalability was an issue and management was another problem. It was only for Windows. The PC backup feature was basic and it could not compete with other PC features that do sync and share. Plus, it was expensive and that didn’t help. It was not extremely successful and they figured it’s better off to cut their losses and get out.”
While Symantec has made the decision internally, the portal for the Backup Exec.cloud still shows the product is available as of this evening. In a live chat, a sales representative apologized for the confusion.
“Apparently, it is not available for purchasing. It’s not public yet,” the person said. “I am happy to discuss our other backup solutions with you that are very successful and popular products.”
Rinnen said she was briefed on the company’s plans on Nov. 5. She was told an announcement would be sent to partners on Nov. 4 and the portal would be shut down by Dec. 2, 2013 for new customers.
“Sounds like they pushed it back,” she said.
Last month’s Storage Networking World (SNW) conference in Long Beach, Calif., was the swan song for the semi-annual industry event in the United States.
Computerworld/IDG and SNIA, which launched SNW in the United States in 1999, have decided to end the U.S. version of the storage conference. The two groups will be organizing separate events in the future.
The official SNW website states, “Computerworld/IDG and SNIA have decided to focus our individual conference resources on producing events that cover an expanded storage innovation market and to conclude the production of SNW U.S.”
“It was just that IDG and SNIA had different goals in moving forward,” said an IDG spokesperson who requested anonymity. “We thought it was best to produce our own events.”
IDG intends to incorporate storage in their broader conferences such as CITE Conference + Expo, Open Business and Data +. CITE covers technologies ranging from mobile storage to Big Data that use consumer devices in the enterprise. The Open Business event primarily focuses on Big Data.
SNIA, a group that consists mainly of storage vendors, will go forward with storage-related shows such as the Data Storage Innovation Conference and the Storage Developer event.
Held every spring and fall, SNW was a major industry influencer until the late-2000s when large storage vendors began focusing more on their own conferences. Attendance at SNWs dropped significantly over the last five years or so, especially among the vendors. During one recent show, a vendor had a booth set up with no one even staffing it.
There was widespread speculation in the industry that SNW would reduce from two shows a year to one, but the SNW parent companies decided to cancel the U.S. shows. It’s unclear if SNW Europe – held once a year – will continue. “We are not sure yet,” said a SNIA representative who requested anonymity when asked if the European SNW show would continue.
EMC has merged its VMAX, VNX and VNXe development teams into one group. According to a blog by EMC president and COO David Goulden, this move will not result in any platform changes. Development teams will remain the same as they are now. Sales teams are already consolidated across the VMAX and VNX platforms.
The move could bring a change to the management software across platforms, however.
Eric Herzog, senior vice president of marketing for the new Enterprise and Midrange Storage Division (EMSD), said one of the goals of the consolidation is to help customers who have multiple platforms manage them better.
“A lot of our customers buy all three products [VMAX, VNX and VNXe],”Herzog said. “Today we have three versions of Unisphere. They look and act the same, but you need one version to launch VNX and a different version to launch VMAX. A lot of enterprises want one version of the product.”
Herzog said it is unlikely that EMC will have the same management application across all three platforms, but the idea is to have one pane to manage all three. “Think about it more like Adobe,” he said. “You have Acrobat, Acrobat Pro and Acrobat Reader. You launch one and you see all three.”
Unlike when EMC consolidated its Clariion and Celerra platforms into the VNX unified midrange array, this latest move will not eliminate any hardware systems.
“There are no changes to any products, product roadmaps or the way we take our products to market or how we support our customers,” Goulden wrote in his blog.
That may be seen as bad news for those who claim EMC has too many storage array platforms. Along with the products in the new EMSD division, there is Isilon for scale-out NAS, Atmos for cloud and object storage, and the new all-flash XtremIO. EMC execs maintain having different products for different workloads is the best way to go.
“There is some product overlap, but EMC always has some product overlap,” Herzog said. “We don’t see the one platform-fits-all-strategy that one vendor likes to talk about.”
Brian Gallagher, who ran the VMAX team, is president of EMSD. Rich Napolitano, formerly president of the VNX group, will lead a new project inside EMC focused on next generation IT for multi-cloud environments.
Violin Memory’s first quarter as a public company was rocky, and the second quarter doesn’t look much better for the flash array vendor.
Violin reported earnings Thursday for the first time as a public company. Its $28.3 million in revenue increased 37% from last year but missed analysts’ expectations by $3.4 million. Violin’s net loss of $34.1 million was greater than expected, and $8.7 million more than it lost in the third quarter of 2012.
Its forecast of $30 million to $32 million in revenue for this quarter fell far below the analysts’ expectation of $43.6 million.
Like executives from other storage vendors that struggled last quarter, Violin execs blamed the federal government shutdown for the revenue shortfall. And the forecast was based on expectations of another lean quarter of federal spending due to continuing political uncertainty.
Violin CEO Don Basile said the company’s PCIe flash card is off to a slow start, with less than $1 million in revenue in the quarter.
When Violin launched its Velocity PCIe card in March, Violin execs hinted there would be an OEM deal with its NAND flash partner Toshiba to sell the cards, but that has yet to materialize.
Basile said Violin was hoping for $10 million in bookings from the federal government last quarter, and finished with $2.6 million. He said Violin added 32 new customers last quarter, up from 30 in the previous quarter.
Despite the bump, Basile said Violin’s long-term prospects haven’t changed. “The market we serve is large and we are well positioned to take advantage of the long-term trend of flash in the data center,” he said on the earnings conference call. “We have a strong, deep relationship with Toshiba. Fundamentally, our growth drivers remain intact.”
Investors are unconvinced. Violin priced its initial shares at $9 in September, but they opened at $6 per share today.
In a note to customers today, Stern Agee financial analyst Alex Kurtz wrote that Violin’s 32 new customers “is a modest number for a new vendor in the market that should be challenging the incumbents with a better price/performance platform.” He added that EMC’s XtremIO launch could hurt Violin this quarter.
Basile said he is not worried about XtremIO because EMC’s entrance into the all-flash market shows a need for that type of product. As for the array itself, he added, “it appears to be a limited product with a limited set of features.”
Despite all the established and emerging storage startups on the scene, EMC’s top executives say cloud giant Amazon is the competitor that worries them the most.
At the UBS Global Technology Conference this week, EMC executive vice president Jeremy Burton was asked about EMC CEO Joe Tucci’s recent comment to a market research analyst that he was more concerned about Amazon than other competitors.
Burton said Tucci’s answer had as much to do with the lack of challenges from traditional competitors as with Amazon’s strength, but admitted Amazon’s cloud is taking business from EMC.
“If I look at our traditional competitors, I would argue that they’ve never been weaker for a variety of reasons,” said Burton, who heads EMC marketing and product operations. “But Amazon is a beast you know a lot less about. They’ve got a different approach. They are soaking up a lot of the spend in what we traditionally have called shadow IT. So they are building a beachhead in an area where typically we’ve not frequented. We know that they’ve got technology and I think that’s a combination you always take very, very seriously.”
Burton said he did not agree with those who claim a handful of “mega clouds” will dominate IT. He said regulatory and privacy issues will prevent that, as well as the need for traditional IT infrastructures. He said a well-run private cloud with a similar architecture can be cheaper than going to Amazon.
Burton said Amazon’s numbers show that only around 10% of Amazon Web Services (AWS) revenue is going to the enterprise, and likened Amazon’s threat level to that of a fast-growing startup.
“So I don’t subscribe to the view that world domination and the end is near, but they are a competitor that we take seriously,” Burton said.
Burton downplayed any price advantage Amazon has, saying “we don’t put out press releases when we reduce prices and they do. Over the last five years the storage industry in general has reduced prices roughly about 21 percent annually” while Amazon has reduced prices around 14% to 19%.
What Amazon has going for it, he said, is “they have made it easy and IT typically has not been easy to deal with. And so the opportunity for the vendors – EMC being one – is for us to provide something that is as easy to consume as an AWS S3 service.”
He said EMC’s “Project Nile” is a step in that direction.
As for the traditional storage competition, Burton said IBM, Hewlett-Packard, Dell and NetApp are all weaker than they were five years ago. When asked about promising startups, he said few of them have the breadth to take on a company the size of EMC. He said of about 80 startups on EMC’s radar over the last six or seven years, 10 went out of business, 26 were acquired and two went public.
“And I think the two that went public probably wish they had been acquired,” he said, an apparent slap at Fusion-io and Violin Memory. The shares of both of those flash vendors are trading at far below their IPO prices.
“A lot of these [startups] solved a certain part of the problem. The exit for them is to build a revenue stream, solve a small part of the problem and then look to be acquired,” Burton said. “That’s been the history of the storage industry … If they do their job right, they will become a feature in a bigger company’s portfolio.”
Two of the storage startups with large bankrolls will have to spend a big piece of their cash on lawsuits rather than on business.
EMC Inc. fired a legal salvo at all-flash startup Pure Storage Inc., and NetApp Inc. has sued hybrid storage vendor Nimble Storage Inc. In both cases, the established vendors allege that former employees now working for the two startups stole trade secrets and customer lists, and solicited other employees in violation of employment agreements.
Both industry heavyweights are seeking monetary damages, injunctive relief to stop the defendants from using alleged stolen materials, and the return of alleged company secrets. They are also making sure everyone knows how they feel about their upstart competitors.
Pure completed a $150 million funding round in August, and has a total of $245 million in venture funding. Nimble has $98 million in funding. Both startups plan to go public, and Nimble already filed its S-1 registration for an initial public offering. They also both face serious legal bills now.
EMC claims the theft of confidential information by former employees “arises out of a deliberate scheme advanced by Pure Storage through a nationwide pattern of collusion ….”
NetApp’s complaint describes Nimble as “a company built on unlawful hiring and business practices.”
EMC v. Pure Storage
In its complaint filed in the U.S. District Court in Massachusetts on Nov. 4, EMC said the theft of “tens of thousands of proprietary, highly confidential, and competitively sensitive EMC materials” by former employees, now in the possession of Pure Storage, violates the Key Employee Agreements (KEAs) each former employee signed when joining the company.
The agreements require employees to return any EMC materials in their possession when they leave the company, not to divulge any “company secrets” after leaving the company, not to solicit any EMC customers as employees of Pure Storage, and not to solicit any current EMC employees to leave the company.
The EMC complaint alleges that “These claims arise from conduct apparently orchestrated by or known to the highest executive management levels of Pure Storage.”
Most of the former EMC employees named in the suit are in sales, and the lawsuit is weighted heavily toward allegedly stolen sales trade secrets, including customer lists and “sensitive pricing solutions and strategies custom-tailored for each individual customer.”
Pure Storage CEO Scott Dietzen returned fire at EMC in a blog post Nov. 5, claiming EMC’s charges have “no merit whatsoever,” that Pure Storage will defend themselves vigorously, and that it has the resources to do so – citing the company’s recent funding round.
Dietzen also criticized EMC’s own hiring practices, claiming that “in general more mature companies risk forgetting the golden rule—they are happy to recruit great people to join their companies from competitors (indeed they aggressively solicit such hires), but then resort to onerous non-compete agreements and lawsuits to deter the same employees from exercising their freedom to seek employment elsewhere.”
NetApp v. Nimble
NetApp filed its lawsuit against Nimble and three former employees Oct. 29 in the U.S. Northern California District Court. It claims that two of the three former employees violated the Computer Fraud and Abuse Act by using unauthorized access to NetApp’s computer systems to acquire confidential and proprietary information and pass it on to Nimble.
NetApp also alleges that the three former employees violated their NetApp employment agreements by taking or keeping proprietary NetApp materials, and soliciting NetApp employees to join Nimble.
Generally, lawsuits against former employees that involve non-compete and employment agreements that last after an employee has left a company are hard to win because the courts view such agreements as restraint of trade that could hinder a person’s ability to gain employment.
But these cases center more around people who joined direct competitors directly after leaving the plaintiff companies and whether they took sensitive information with them that is helping their new companies gain competitive advantages.
Ultimately, the question probably won’t be how “onerous” the EMC and NetApp employee agreements are. The key legal questions are whether the courts uphold the agreements, if the former employees breached the agreements, and whether EMC and NetApp suffered harm.
NetApp unveiled a controller and memory upgrade to its EF all-flash array system today, less than a week after EMC finally made its XtremIO flash platform generally available.
The EF550 replaces the EF540 that NetApp launched in early 2013. George Kurian, NetApp’s executive VP of product operations, said the vendor’s other flash platform – the FlashRay – will go into beta before the end of the year but won’t be generally available until 2014.
NetApp claims the EF550 delivers more than 400,000 sustained IOPS, around 100,000 IOPS more than the EF540. The new system uses 800 GB multi-level cell (MLC) SSDs, and scales to 96 TB in a 24u enclosure. A base system holds 12 or 24 drives, and can scale to 10 12-drive enclosures or five 24-drive enclosures.
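The quoted 96 TB maximum follows directly from the drive size and enclosure counts — a quick sanity check, using decimal units (1 TB = 1,000 GB):

```python
# Sanity check on the EF550 capacity figures quoted above (decimal units).
drive_gb = 800              # 800 GB MLC SSDs
ten_by_twelve = 10 * 12     # ten 12-drive enclosures
five_by_24 = 5 * 24         # five 24-drive enclosures
assert ten_by_twelve == five_by_24 == 120
print(ten_by_twelve * drive_gb // 1000)  # prints 96 (TB) either way
```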
NetApp claims it has shipped more than 550 EF540 arrays this year. “We believe that puts us in the number one or two market position for all-flash arrays,” Kurian said.
NetApp likens the performance of one EF550 enclosure to that of two full racks of traditional spinning drives. Kurian said database and virtual desktop infrastructure (VDI) acceleration are the major use cases for the EF flash platform.
Unlike the FlashRay, which will have a new operating system designed specifically for flash, the EF550 uses the same SANtricity operating system as other E-Series systems. During NetApp’s earnings report call last week, CEO Tom Georgens said the EF series “should lay to rest the canard” that flash storage systems need new disk controller technology to work.
The EF550 was part of an E-Series launch that also included the E2700 for remote offices and the E5500 high-performance midrange system. Those block storage systems replace the E2600 and E5400. The E2700 supports 12 Gbps SAS and can scale to 768 TB with 4 TB drives. The E5500 supports 16 Gbps Fibre Channel along with 10-Gigabit Ethernet iSCSI and InfiniBand and can scale to 1.5 PB. The E2700 and E5500 can support SSDs for hybrid configurations.
Hyper-converged storage startup SimpliVity’s executives were in hyper-funding mode the last few months. SimpliVity closed a whopping $58 million funding round today, bringing its total to $101 million over three rounds.
CEO Doron Kempel said SimpliVity will use the cash to significantly grow the size of the company and its sales. SimpliVity’s OmniCube stack includes storage, server, and VMware hypervisor in one box, with the ability to cluster 40 units.
“This gives us a lot of dry powder and we plan to triple the size of our organization next year and multiply our sales by five times,” Kempel said of the funding round.
He said SimpliVity has around 130 employees now. He won’t disclose revenue but said the startup has more than 100 customers, many with more than one OmniCube. He said one customer bought six systems in 17 days.
Kempel said SimpliVity will add more form factors and capabilities next year. You can expect a smaller system in the 2 TB to 3 TB range for remote offices and support for the KVM hypervisor early in the year and Microsoft Hyper-V to follow.
When asked how he raised so much money, Kempel said the investors agreed with him that SimpliVity can take over the data center. “The IT stack has 12 products,” he said. “VMware virtualizes the servers, and we virtualize everything else.”
Nutanix, Scale Computing, and Pivot3 also sell hyper-converged hardware stacks, and software players are getting into the game. Last week Maxta came out of stealth with software that pools capacity and processing power on virtual machines. VMware’s Virtual SAN (vSAN) – currently in beta – behaves similarly.
Kleiner Perkins Caufield & Byers (KPCB) Growth and DFJ Growth venture companies led the SimpliVity funding round, with Meritech, Swisscom Ventures, Accel and Charles River Ventures participating.