In an SEC filing Monday, NetApp disclosed it has paid $128 million to the federal government to settle a Department of Justice probe into its contracting activity with the General Services Administration (GSA).
EMC has also disclosed it’s the target of a similar probe, and late last fall it was also reported that several other IT vendors have been targeted by the DOJ over pricing, including Sun Microsystems, Canon and Cisco.
In exchange for the settlement, according to the filing, “the parties to the Agreement have agreed to release NetApp with respect to the claims alleged in the investigation as set forth in the Agreement. The Agreement reflects neither an admission nor denial by NetApp of any of the claims alleged by the DOJ, and represents a compromise to avoid continued litigation and associated risks.” NetApp “recorded a $128.0 million accrual for this contingency in the third quarter of fiscal 2009.”
When consumer backup SaaS provider Carbonite sued its storage vendor, Promise, over systems Carbonite alleges lost customer data, ESG founder and analyst Steve Duplessie wrote a blog post urging enterprise users to ask tough questions of backup service providers to identify those prepared to offer enterprise-level services. In particular: What does your infrastructure look like? What failsafe mechanisms are in place to prevent data loss? And what service level agreements (SLAs), if any, are provided?
When Carbonite backup SaaS rival SpiderOak came along with a pitch for me about how they’re a) more reliable and secure than Carbonite and b) welcoming Carbonite customers with a 20% discount on a year’s service for switching, I decided to try out those questions on them. What followed was an interesting discussion.
SpiderOak CEO Ethan Oberman says SpiderOak, unlike Carbonite, assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said. But does putting together its own storage systems make SpiderOak’s more reliable? Not necessarily.
(Side note: SpiderOak isn’t alone here. While many storage vendors are betting their futures on selling pre-built systems to cloud service providers, the pitch I hear from those service providers is that their service is more reliable / more secure / better performing because they built it themselves.)
So if we take the claim that home-built is better at face value, let’s say I was a Carbonite user who lost data, and now I’m looking to switch providers. Assuming I haven’t been totally turned off by the idea of SaaS in general, I think I’d still like to see something definitive in writing from my prospective new vendor, regardless of that vendor’s data center architecture, about data loss and what it’s prepared to offer me on that front.
It took quite a while before our conversation today progressed to the point where we could concede that although data loss is highly, highly, highly unlikely, it theoretically can happen. One of the reasons SpiderOak doesn’t address that possibility outright is because it doesn’t want that possibility in users’ minds. “We take this very, very seriously,” Oberman said. “Losing customer data in this market basically means going out of business.”
But as Duplessie put it, “I know things break. What I don’t know is how often they break, or why, and most importantly – what you do about it.”
Oberman said SpiderOak would probably do the right thing and give consumers their money back in the event of their data being lost. “It’s just ethical business practices,” he said. “We stand behind our product.”
Would he put that in writing?
Well, that opened up another can of worms. SpiderOak, Carbonite, and other consumer-grade backup SaaS vendors don’t offer SLAs or even formal written guarantees about data loss, in part, Oberman said, because of a fear of predatory lawsuits in the consumer world. Why these are more prevalent among consumers than among businesses remains unclear to me, but SpiderOak claims that’s what its legal counsel says. Also, it’s not as easy to assign a value to consumer data as it is to corporate data attached to billable hours in order to institute hard financial penalties, and SLAs make the whole service more expensive, Oberman claimed.
For its consumer/SOHO service, SpiderOak’s focus is on cost: it charges about $5 to $10 per month. “Those are pretty cheap numbers, so cheap, in fact, that we can’t offer geographic redundancy economically,” he said. To provide SLA-worthy redundancy, the cost of the service would have to go up. This is something SpiderOak is planning to do by this summer with the launch of a new enterprise-focused backup service, which will be about four times as expensive as the current offering.
In the meantime, Oberman suggested that users attracted to the cost but concerned with the reliability of consumer/SOHO services could theoretically treat them like some companies do internet service providers (ISPs), and deploy two or three of the cheaper services for DIY redundancy. “There does seem to be a gap” between expensive fully-redundant enterprise services and cheaper but less resilient consumer/SOHO services in the market right now, he added.
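Oberman’s DIY-redundancy idea is simple to sketch. As a rough illustration (the “provider” directories here are stand-ins for backup targets, not any real provider’s API), backing the same file up to multiple independent targets and verifying each copy by checksum means no single provider losing data loses the file:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def redundant_backup(source: Path, targets: list[Path]) -> dict[str, bool]:
    """Copy `source` into each target directory and verify each copy
    against the original's checksum, so any one target losing data
    still leaves intact copies elsewhere."""
    expected = sha256_of(source)
    results = {}
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        dest = target / source.name
        shutil.copy2(source, dest)
        results[str(target)] = sha256_of(dest) == expected
    return results
```

The same pattern — write everywhere, verify everywhere — is what Oberman suggests consumers can approximate by simply subscribing to two or three cheap services at once.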
SNW Orlando 2009 wraps up today, and the economic downturn was the elephant in practically every room at the show. It was the main topic of discussion, a factor in debates over technology trends, and a subliminal part of the background as the vendor presence dropped dramatically compared to previous shows. It was difficult to get a handle on exactly how many attendees there were–a wide range of numbers was floating around– but many of the vendor reps and analysts I ran into at the show exclaimed over how quiet it seemed compared to past years.
Most striking to me was the noise (or lack thereof) around the press room, in previous years a bustling hub of activity as armies of vendor marketing directors and PR reps briefed an equally large cadre of analysts and press. The room was smaller than usual this year, and empty at times.
However, there were still some interesting discussions going on around the show, including sneak previews of interesting upcoming product announcements.
NEC eyes content-aware dedupe
NEC’s HydraStor backup and archiving grid system will soon put a new twist on its block-level dedupe, according to NEC director of product management and technical marketing Gideon Senderov. He said the vendor is working on content-aware deduplication, believing it can lead to better dedupe ratios for customers. “Backup applications insert their own metadata,” he said. “Depending on how they aggregate files, you may have different metadata within them. Similar chunks can sometimes look different.” Filtering application metadata from files requires integration with multiple backup apps. Sepaton’s DeltaStor VTLs already take this approach.
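To see why filtering application metadata matters for dedupe ratios, here’s a toy sketch (the record format and fixed header length are invented for illustration; they are not NEC’s design or any real backup app’s layout). Two backup applications wrap identical file data in different metadata: hashing whole records finds no duplicates, while hashing the payload after stripping the metadata does:

```python
import hashlib

HEADER_LEN = 32  # assumed length of the backup app's per-record metadata

def make_record(app_tag: bytes, payload: bytes) -> bytes:
    """Simulate a backup application wrapping file data in its own metadata."""
    header = app_tag.ljust(HEADER_LEN, b"\x00")
    return header + payload

def naive_fingerprint(record: bytes) -> str:
    """Block-level dedupe: hash the record as stored, metadata and all."""
    return hashlib.sha256(record).hexdigest()

def content_aware_fingerprint(record: bytes) -> str:
    """Content-aware dedupe: strip the app metadata first, then hash
    only the underlying file data."""
    return hashlib.sha256(record[HEADER_LEN:]).hexdigest()

payload = b"identical file contents" * 100
rec_a = make_record(b"netbackup-v6", payload)
rec_b = make_record(b"tsm-v5", payload)

# The same data wrapped by two backup apps fails naive dedupe...
assert naive_fingerprint(rec_a) != naive_fingerprint(rec_b)
# ...but matches once the metadata is filtered out.
assert content_aware_fingerprint(rec_a) == content_aware_fingerprint(rec_b)
```

In practice the hard part is what Senderov alludes to: each backup application formats and aggregates its metadata differently, so the stripping step has to be built per application.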
HP has blade plans for LeftHand
LeftHand Networks’ SANiQ IP SAN software will soon be ported to HP’s blade servers, according to Lee Johns, marketing director for entry storage, HP StorageWorks. It’s part of an overall “converged infrastructure” trend for HP, which envisions storage as a network service centrally managed by software. The company is also preparing a software framework, based on its 2007 acquisition of Opsware Inc., to centrally manage different kinds of storage devices along with servers.
Brocade CTO talks FCoE
Brocade’s big announcement at the show was the rollout of its first Fibre Channel over Ethernet (FCoE) products, a top-of-rack switch and a converged network adapter (CNA). The company has taken a less bullish attitude toward the technology than its rival Cisco, releasing its top-of-rack switches several months after Cisco released its Nexus FCoE product line.
CTO David Stevens and product manager Pompey Nagra went over the details of the technology with me, as well as its value proposition, which the cynical among us might see as an attempt for FC vendors to stay relevant as 10 GbE threatens to eat their lunch. Stevens pointed out that though FCoE, like 10 GbE, requires a swap-out of switching equipment in the data center, FC storage assets can remain the same. Even though convergence-enhanced Ethernet will bring the protocol more into line with FC as a lossless network with some flow controls, FC offers services like zoning and multipathing that will still be important to storage administrators, Stevens said. He also dismissed the idea that FC will fall out of favor once it’s slower than Ethernet (currently, FC is at 8 Gbps; Ethernet’s looking to move to 10 Gbps). “Maybe long-term,” he said. “But is that big a technology shift for the traditionally risk-averse storage community really worth two more gigs per second?”
Stevens said the focus for FCoE should really be just on cutting down on the number of wires in the data center. “The first-hop technology can all be combined while preserving assets in the infrastructure.”
However, Stevens also admitted this value proposition is similar to what has been promised by InfiniBand technologies, which have yet to see widespread adoption outside of high-performance computing (HPC) niches. Will FCoE be more successful because Ethernet is a more familiar interface than InfiniBand? I asked Stevens. “I don’t have a good answer for you there yet,” he said.
Thales sees converging encryption standards
In February, Thales Group was part of a coalition of vendors that submitted a standard for interoperability between key management systems and encryption devices, called the Key Management Interoperability Protocol (KMIP), to the Organization for the Advancement of Structured Information Standards (OASIS). If adopted, KMIP would mean users could attach almost any encrypting device to one preferred key management system, regardless of the vendors involved. Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) approved a standard in January 2008 for managing encryption on storage devices. Now the vendors are working on bridging the two standards, according to Kevin Bocek, director of product marketing for Thales, so that if product developers want to roll the more detailed IEEE spec into the more general OASIS spec, the two will be compatible. This interoperability will probably be more valuable to developers than to end users, he said, as the IEEE spec contains very granular details for developing products, down to specifying protocols. If engineers don’t have to re-invent the encryption wheel or ensure interoperability for each of their products, it could get products to market faster or free them to focus on other innovations, he said.
SNIA SSD initiative finds ‘wide variability’ in SSD performance
The Storage Networking Industry Association (SNIA) had a few booths set up on the show floor focused on its SNIA Solid-State Storage Initiative (SSSI), including a demo of benchmark comparisons between different vendors’ single-level cell (SLC) enterprise SSDs by Calypso Systems. CTO Easen Ho, on hand for the demonstration, walked me through bar graphs of performance results. The specific manufacturers’ names were not listed (no fun!), but it was easy to see the ‘stairsteps’ between the different results on the graph. Still, as I peered at the screen it looked like they were all generally in the same ballpark. That is, until Ho pointed out to me that the y axis of the graphs was actually a logarithmic scale. This was initially done to better compare the results against spinning disk drives, which otherwise “wouldn’t even be visible on these graphs,” Ho said.
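The effect Ho described is easy to reproduce. With made-up IOPS figures (invented for this sketch, not Calypso’s actual results), the same gaps that look dramatic on a linear axis compress toward the top of a logarithmic one once a slow disk drive anchors the bottom of the scale:

```python
import math

# Illustrative IOPS figures, invented for this sketch:
devices = {"HDD": 300, "SSD A": 10_000, "SSD B": 20_000, "SSD C": 40_000}

lo, hi = min(devices.values()), max(devices.values())

def linear_pos(iops: float) -> float:
    """Position of a result on a linear y axis, scaled 0..1."""
    return (iops - lo) / (hi - lo)

def log_pos(iops: float) -> float:
    """Position of the same result on a log10 y axis, scaled 0..1."""
    return (math.log10(iops) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))

for name, iops in devices.items():
    print(f"{name:6s}  linear={linear_pos(iops):.2f}  log={log_pos(iops):.2f}")

# On the log axis the slowest SSD sits about 70% of the way up the chart,
# crowded near the fastest; on a linear axis it sits barely a quarter up.
assert 1.0 - log_pos(10_000) < 1.0 - linear_pos(10_000)
```

Which is exactly why the bars all looked to be “in the same ballpark” until the axis was pointed out.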
Data Mobility Group analyst and StorageMojo blogger Robin Harris (left) looks for a ‘man on the street’ interview during a chat on the show floor with BlueArc director of corporate marketing Louis Gray.
Here are the news stories filed from balmy Florida this week:
- Primary storage data reduction takes center stage at SNW
- Data Robotics automates RAID 6, thin provisioning for SMBs with DroboPro
- Storage admins mull SSDs at SNW
- Brocade rolls out FCoE switch, adapters
- SNW: DAS makes a comeback as alternative to SAN, NAS
- New Symantec CEO stresses product integration
- Fusion-io plans to add software, SSD-based storage systems
How does a startup get $47.5 million in funding in today’s economy?
“By being a leader in a pretty hot sector,” says David Flynn, CTO and one of the founders of Fusion-io, which closed its whopping Series B round today.
That hot sector is flash technology, which is rapidly making its way into enterprise storage. It also helps that Hewlett-Packard is an OEM partner, using Fusion-io’s ioMemory technology in the HP StorageWorks IO Accelerator NAND flash-based storage adapter. Fusion-io also has technology partnerships with IBM and Dell.
Flynn says the Series B funding will help Fusion-io “broadly address the market that’s growing at a break-neck pace, as well as to help us engineer our next-generation product.”
The first of these next-generation products is the ioSAN, due for release over the summer. Fusion-io describes the PCI Express-based ioSAN as server-deployed network-attached solid state storage.
“We’re working on what we call server-deployed network storage – putting components into the server to share high-performance storage over the network,” Flynn said.
Flynn says more than 400 customers are using Fusion-io cards, most of them large organizations running relational databases, web servers, or virtual machines.
Fusion-io also officially announced David Bradford as its CEO, although that move happened at least a month ago. Bradford spent 15 years at Novell, including three years reporting to current Google CEO Eric Schmidt. Bradford also brought Apple founder Steve Wozniak to Fusion-io as chief scientist a few months back.
Now that “Woz” is no longer dancing with the stars, he can devote more time to Fusion-io. Flynn says while Wozniak’s name recognition has helped Fusion-io generate attention, it wasn’t much of a factor in attracting new VC Lightspeed Venture Partners and getting more funding from Series A investors New Enterprise Associates (NEA), Dell Ventures and Sumitomo Ventures.
“It may help get attention, but I don’t think it was substantial in closing the deal,” Flynn said. “I think our OEM relationship with HP, our deep relationship with IBM, the fact that Dell is an investor, and our huge market potential is the real essence of why we got funded.”
Isilon Systems finally took Wall Street’s advice about slashing staff today, revealing it would reduce its worldwide workforce by approximately 10%.
Isilon had 394 employees at the end of 2008. The clustered NAS vendor is also changing its chief of sales, bringing in NetApp and Quantum veteran George Bennett to replace Steve Fitz as SVP of worldwide field operations.
Financial analysts have called on Isilon to cut staff since founder Sujal Patel took over as CEO in September of 2007. The company hasn’t had a profitable quarter since going public in 2006.
Patel gave up his resistance to the cuts, and estimated the reduction will cost $850,000 this quarter and then save the company $4 million annually.
“It’s clear that persistent global economic weakness and uncertainty has led to contraction in many of our customers’ IT budgets,” Patel said in a press release today announcing the restructuring.
Isilon actually met sales expectations despite suffering wider losses than expected, according to the preliminary results it disclosed today. Isilon expects revenue in the range of $26.5 million to $27 million, up approximately 10% to 12% from the same period last year and down approximately 15% to 17% from the fourth quarter of 2008. Although Isilon did not give a previous forecast for the quarter, financial analysts expected around $26.9 million in revenue and a loss of 12 cents per share. Isilon said it expects to lose 14 cents to 15 cents per share.
The company also took an inventory writedown of around $3.8 million on its older products, citing the softening economy and anticipating that customers will move to the new products it launched last month.
“That means there was faster adoption of new product, but they were unable to get rid of the old solution, so that’s a mixed bag,” Enterprise Strategy Group analyst Brian Babineau said.
3PAR’s new F-Class midrange systems that launched today look a lot like its T-Class enterprise systems, only smaller. So the F-Class inherits its share of enterprise features, as well as a notable gap in the T-Class platform.
The enterprise features include the ability to scale to four controllers (other midrange systems support two) and what 3PAR calls Mesh-Active controller nodes. The Mesh-Active architecture provides symmetrical access to all LUNs, instead of connecting a LUN to a single controller.
At least one analyst, Evaluator Group managing partner Russ Fellows, gives 3PAR high grades for bringing those features to the midrange.
“By scaling to four controllers, they have twice as many as any midrange-class system out there,” he said. “And the system scales linearly, with symmetric LUN access. All high-end data center systems support symmetric LUN access across controllers, but pretty much nobody in the midrange does.”
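The distinction Fellows is drawing can be sketched in a few lines. In this conceptual model (not 3PAR’s actual implementation), a traditional midrange array pins each LUN to one owning controller, while symmetric access lets any controller service I/O for any LUN:

```python
CONTROLLERS = ["node0", "node1", "node2", "node3"]

def asymmetric_route(lun: int, io_num: int) -> str:
    """Traditional midrange design: every I/O for a LUN goes to the one
    controller that owns it, so a hot LUN can saturate a single node."""
    return CONTROLLERS[lun % len(CONTROLLERS)]

def symmetric_route(lun: int, io_num: int) -> str:
    """Mesh-style symmetric access: any controller can serve any LUN,
    so I/O against a single LUN fans out across all nodes."""
    return CONTROLLERS[io_num % len(CONTROLLERS)]

# 8 I/Os against one hot LUN:
asym = {asymmetric_route(7, i) for i in range(8)}
sym = {symmetric_route(7, i) for i in range(8)}
assert len(asym) == 1   # all I/O lands on one controller
assert len(sym) == 4    # I/O spreads across all four controllers
```

The round-robin routing here is just the simplest possible stand-in; the point is that with symmetric access, adding a controller adds usable bandwidth for every LUN, which is why four controllers can scale roughly linearly.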
However, 3PAR’s new midrange systems share the same missing feature as its enterprise systems. At a time when just about every new storage system rollout includes solid state drive (SSD) support, 3PAR still has none.
“We will support solid state, but the pressure to do so has been muted compared to what we’ve observed with other storage vendors,” 3PAR VP of marketing Craig Nunes says.
Nunes said when 3PAR gets into SSD, it will be with a SATA interface instead of the pricier FC-attached SSDs.
“If you need higher IOPS, we deliver that today with wide striping,” he said. “Fibre Channel-attached SSDs are premium priced and will never cross over the dollar per IOP line. Wide striping is better. The next wave of SATA-interfaced SSD drives promises to drop that IOP per dollar to the crossover point.”
Although competitors offer SSDs in the midrange as well as the enterprise, Fellows says lack of SSD support hurts 3PAR more with its enterprise systems than with the F-Class. “In the midrange, you add more than six SSDs and you have a million-dollar system, and that’s not a midrange system anymore,” he said. “That will change in a year or two. But it is a bit of a drawback now in the high end.”
Double-Take is consolidating its server-based replication, TimeData CDP, GeoCluster software for Microsoft stretched clusters, LiveWire bare-metal restore, and emBoot’s netboot iSCSI boot-from-SAN and sanFly iSCSI target products into four titles:
Double-Take director of solutions engineering Bob Roudebush says Double-Take Move and Double-Take Flex are available now. Double-Take Backup and Double-Take Availability will be available by summer.
The long-term roadmap for the products is to add automation. “It might be more desirable sometimes to boot from SAN, and other times to boot locally, but they must be in sync,” Roudebush said. “That’s the vision for sure.”
In the meantime, user Paul Hurst, network administrator for the City of Airdrie in Alberta, Canada, said he’s looking forward to replacing Double-Take’s Virtual Recovery Assistant (VRA) with Double-Take Move. “The key is more efficient migrations [between physical and virtual servers],” he said. “With VRA, you could do one move, test it, and then start all over again. With Double-Take Move, I can do a test move without affecting the production box.”
EMCers are talking up their SourceOne archiving platform today, and their rivals at Symantec are doing the same. But while EMC extols the virtues of its EmailXtender replacement, Symantec is giving EmailXtender customers a come-hither look.
In an open letter to EmailXtender customers, Symantec asks: Why go with a version 1 product lacking integrated SharePoint and file archiving support when you can switch to an established product?
The letter promises EMC customers a quick and easy migration to Symantec Enterprise Vault. Enterprise Vault senior product manager Dave Campbell says migration services are available for customers of any archiving product, but obviously EmailXtender users are in the bull’s eye of the target.
“We want to present a turnkey package for migrating from EmailXtender, Zantaz, whatever,” Campbell said. “If you have 2,000 to 5,000 users with two or three years of data in archives, you’re a good candidate for migration services. We’re getting multiple requests each week from customers looking to migrate off legacy systems, and more of those requests are from EmailXtender customers than usual.”
The migration services include a system healthcheck to identify best practices as well as potential failures and errors, and an architectural assessment to understand what is archived and ensure a proper chain of custody for archived data. Campbell says most of the migration can be done remotely by Symantec Global Services.
The migration services are not free, however, and Symantec isn’t promising discounts to get EMC customers to switch. Campbell says the pricing depends on how much information is in the legacy archive.
Bottom line: Symantec is trying to get at SourceOne in the crib before it gets a chance to grow up.
As we reported yesterday, Aptare Inc. upgraded and expanded its StorageConsole storage resource management (SRM) suite this week, adding products to manage resources in VMware virtual server and NetApp replication environments.
As a sidebar to that story, though, I also had a pretty interesting conversation with the CEO of Aptare, Rick Clark, about how business has been during the global economic crisis. The consensus among analysts in this market has been that while the need for better capacity management is real, organizations have generally not been willing to pay for it. However, Clark said Aptare has more than 300 customers, and that sales the last three quarters have been the strongest in the company’s history.
For more on this topic, check out our podcast of the interview with Clark about how the company has grown in recent months: