As the first vendor to make data deduplication a key piece of the backup picture, Data Domain has benefitted most from the dedupe craze. And now it has the most at stake when deduplication becomes mainstream. If all the major storage vendors offer deduplication, there goes at least part of Data Domain’s edge.
That’s not lost on Data Domain CEO Frank Slootman. He sees NetApp’s decision to build deduplication into its operating system and use it for primary data, and the move by other large disk and tape vendors to put dedupe into their virtual tape libraries as part of a strategy to marginalize the technology.
“NetApp’s and EMC’s fundamental strategy is to make deduplication go away as a separate technology,” Slootman said. “NetApp has been giving away their deduplication, and we think EMC [through an OEM deal with Quantum] will fully charge for storage but give away dedupe. They don’t want dedupe to be a separate business, or even a technology in its own right.”
Slootman says he’s not worried, though. He sees the biggest benefit of deduplication as an alternative to VTLs, and claims many new Data Domain customers use deduplication to replace virtual tape rather than enhance it. He calls deduplication for VTLs a “bolt-on” technology, whereas Data Domain built its appliances specifically for dedupe.
And he maintains that deduplication doesn’t work for primary storage. It’s not a technical issue, but a strategic one.
“Primary data lives for short periods of time, why dedupe that?” he said. “It doesn’t live long enough to get any benefit to reducing its size. If data doesn’t mutate, it should be spun off primary storage anyway. It should go to cheaper storage. It’s the stuff that doesn’t change that mounts a huge challenge for data centers. You can’t throw it away, and it’s expensive to keep online.”
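For readers who haven’t peered under the hood, deduplication boils down to this: split data into chunks, fingerprint each chunk with a hash, and store only the chunks you haven’t seen before — which is exactly why long-lived, unchanging data pays off and short-lived primary data doesn’t. Here’s a toy sketch in Python; the fixed 4 KB chunks and SHA-256 fingerprints are my illustrative choices, not Data Domain’s actual design:

```python
import hashlib

class DedupeStore:
    """Toy chunk-level deduplicating store: one copy of each unique chunk."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # fingerprint -> chunk bytes

    def write(self, data: bytes) -> list:
        """Store data; return the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # only new chunks consume space
            recipe.append(fp)
        return recipe

    def read(self, recipe: list) -> bytes:
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupeStore()
backup = b"A" * 8192            # a "full backup" of unchanging data
r1 = store.write(backup)
r2 = store.write(backup)        # the next night's identical backup adds nothing
assert store.read(r1) == backup
print(len(store.chunks))        # 1 unique chunk stored, despite two full writes
```

Two identical full backups land on the same fingerprints, so the second one costs essentially nothing — but data that mutates constantly, as primary data does, generates fresh chunks every time.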
EMC’s Centera has been something of a question mark for many in the industry over the last 6-8 months. Rumors seem to continually swirl around a major overhaul or replacement for the first content-addressed storage (CAS) system to hit the market. Those rumors and speculation persist even after hardware and software refreshes, such as the introduction of CentraStar 4.0 software last week, and despite insistence from EMC officials that no further major overhauls to the system are planned.
So far Centera remains the leader in market share and the best-known CAS product in the industry, but as we all know, the archiving market is heating up like never before, and other big competitors like Hewlett-Packard and Hitachi Data Systems have been refreshing archiving systems to compete better, to say nothing of archiving startups (or re-starts) popping up like mushrooms all over the industry.
Today, in an interesting twist, one of those startups, Caringo, revealed that Centera’s director of technology, Jan Van Riel, has left EMC to be Caringo’s VP of Advanced Technology.
Execs leave EMC all the time, often for positions of higher responsibility at newer companies. But there’s a tangled, shared history between these players in particular. The founders of Caringo were also among the co-founders of FilePool, which became Centera when EMC acquired it in 2001. Van Riel was the CTO of FilePool prior to joining EMC as part of the acquisition.
Caringo’s CAS uses standard CIFS and NFS protocols to ingest data, rather than a proprietary API as Centera does. Caringo’s product can run on clusters of virtually any kind of hardware (one example they showed me was the software running on a Mac external drive). With this product, they find themselves in the strange position of launching attacks against what they view as the proprietary, hardware-bound nature of a competitive product that they themselves created.
Who knows if it really means anything that Van Riel has joined with his old buddies again, but he also made a public statement critical of EMC in the press release Caringo put out announcing the move: “With EMC scaling down the Centera unit and the future of Centera unclear, the chance to join Caringo, which understands the potential of CAS, and partner once again with Paul Carpentier was too good of an opportunity to pass up.”
The plot thickens…
QLogic opened last week by declaring it has 8-Gbit/s HBAs and switches available, and ended the week by quietly disclosing the resignation of its president and COO Jeff Benck.
The connection between the two pieces of news isn’t yet clear, but until Friday both 8-gig and Benck were considered keys to QLogic’s future.
Former IBM exec Benck has been considered the likely successor to CEO H.K. Desai since joining QLogic last May. He has been the vendor’s front man with Wall Street as well as in the storage industry with 8-gig and the emerging Fibre Channel over Ethernet (FCoE) protocol. Benck was quoted in QLogic’s 8-gig press release last Monday, and he even did an interview on QLogic’s strategy later in the week with the Orange County Business Journal Online for a story that appeared today.
QLogic did not even issue a press release on Benck’s resignation. It filed a notice with the SEC on Friday afternoon – a time when companies often release news they want to slip by with little scrutiny.
“This is quite a shock,” a financial analyst who covers QLogic said in an email to SearchStorage.com after hearing the news.
A source with knowledge of QLogic said Desai told Benck he would not renew his contract, which expires May 1. The move comes as QLogic and its long-time HBA rival Emulex battle to get a leg up on 8-gig and FCoE, and as switch vendor Brocade makes its move into the HBA space. This is an important time for storage networking vendors as they prepare for a coming convergence between Ethernet and Fibre Channel networks.
Wall Street analysts take it as a bad sign for QLogic, which is also without a CFO following Tony Massetti’s resignation last November to become CFO at NCR.
Kaushik Roy of Pacific Growth Equities LLC described Benck as “sharp and pretty engaged.” “I don’t get the idea that Jeff was incompetent,” Roy said. “There must have been serious disagreements between him and H.K. They’re still searching for a CFO. All these things don’t bode well for the company. It is clear that QLogic is having some serious strategic issues that are extremely hard to overcome.”
Aaron Rakers of Wachovia expressed a similar opinion in a note to clients today.
“We view this as a negative for QLogic,” Rakers wrote, adding that Benck was seen as Desai’s probable successor. “This announcement likely puts increased strain on QLogic’s executive management team, which has been in the process of looking for a CFO.”
Benck’s resignation raises issues regarding QLogic’s future technology direction, as well as the company’s fate. There has been speculation for several years that Cisco would acquire QLogic. Cisco resold QLogic Fibre Channel switches before developing its own fabric switches, the vendors are working together on FCoE, and QLogic has pieces that Cisco lacks for the coming converged network architecture.
But Roy, who was quoted in the Orange County Business Journal story saying he “would not be surprised” if Cisco buys QLogic and Brocade acquires Emulex, said Benck’s departure probably means no deal is imminent. “Why would Jeff leave before the acquisition?” he said. “It would make more sense for him to stick around until after the deal.”
The storage market is a vibrant one right now, and as social networking concepts like blogs become more popular in the corporate world, the storage industry has a lively, varied blogosphere to match. Below are examples of some of the more interesting commentary I’ve seen lately, in case you missed them.
NetApp has been on the radar screen more than usual this week, with the announcement of its rebranding campaign followed by its Analyst Day 24 hours later.
When I’ve written about NetApp over the past few months, I’ve received feedback from its customers talking about the discrepancies between what NetApp’s claiming and what they’re seeing in their shops. A while back I heard from someone using one of NetApp’s arrays who said he felt there was a gap between what his NetApp sales rep told him and what actually happened after he installed the machine, specifically when it came to Fibre Channel LUNs and snapshots.
According to NetApp, its most current official best practices state that LUNs have the same snapshot overhead as other data on FAS systems, an estimated 20%. But in the course of reporting on Analyst Day, I quoted a different user in the article talking about how he’s seen overhead issues with LUNs. This user’s understanding is also that the best practice is 100% overhead for snapshots of that data.
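To see why the 20% vs. 100% discrepancy matters, treat each figure simply as extra capacity reserved per LUN — my simplification; NetApp’s actual snapshot reserve mechanics are more involved than a flat percentage:

```python
# Back-of-the-envelope: capacity you'd provision for one LUN under each
# of the two "best practice" figures users are hearing.
lun_size_gb = 500

for overhead in (0.20, 1.00):
    reserved = lun_size_gb * overhead
    total = lun_size_gb + reserved
    print(f"{overhead:.0%} overhead: {reserved:.0f} GB reserved, "
          f"{total:.0f} GB provisioned")
```

For a 500 GB LUN, the first figure means setting aside 100 GB; the second means doubling your purchase to 1 TB per LUN. Multiply that across a few dozen LUNs and it’s easy to see why users care which number is right.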
Another comment in my Analyst Day story suggested high-end customers don’t view NetApp’s boxes as matching the reliability of traditional Tier 1 arrays. Val Bercovici, director of competitive sales for NetApp, said that attitude is outdated because it doesn’t take into account the vendor’s more recent focus on higher-end storage. As he put it, it represents a “very rear-view mirror view of NetApp.” But he conceded that this is exactly why NetApp is trying to change its messaging.
But messaging can be a big part of the problem when there is a discrepancy between what the sales rep claims and what the actual hands-on engineer wants to configure. Who knows more about how the machine is actually going to perform, a sales or marketing rep or an engineer doing the installation?
The user I quoted in the Analyst Day story, Tom Becchetti, says he was told about the 100% overhead best practice at a training class last fall, and concedes the instructor could have been going on outdated information. But if that’s the case, the lack of up-to-date information in the field about NetApp products is as big a problem as the overhead itself.
I’m hoping there are other NetApp end users floating around who can weigh in on this. What has been your experience with NetApp products? What has been your understanding of NetApp’s best practices? What would you like to see NetApp doing as it tries to reinvent itself?
The other day I blogged about an update to the “Digital Universe” report EMC sponsored with IDC, which amended estimates of the size of said digital universe upward.
Today while surfing around I saw EMC blogger Chuck Hollis’s post on the report, which contained an intriguing tidbit:
By the way, there’s some new bling for your PC. Last year, as part of the study, EMC offered up a “digital clock” that attempted to measure all information produced in aggregate.
This year, there’s a “personal digital clock” that (after answering a few questions) will estimate just how much digital footprint you’re creating: both directly and indirectly. It’s a bit humbling.
As an example, the personal clock estimates that I’ve created well over a terabyte of “digital shadow” this year so far. And that’s not even counting these blog posts!
Just doing my part for the storage industry, I guess…
I was definitely interested in finding out the exact dimensions of my digital shadow (proven fact: self-absorption is a key driver of Internet traffic), so I downloaded the mini-application they’ve put together with IDC to calculate one’s digital footprint.
It asked a series of questions about surfing habits, the number of minutes you spend on the phone per week, the amount of TV you record on your TiVo, that sort of thing. I was actually a little embarrassed at some of the numbers I put in–some of them were high indeed, especially the ‘hours per week you are actively on the Internet’ one.
Once I’d answered the questionnaire, the application calculated that I generate 6.18 GB of personal digital information per day, meaning that this year I will generate 2.25 TB of digital shadow.
Hollis, meanwhile, writes that he’s already generated over 1 TB this year. Today, March 13, is the 72nd day of 2008, putting him at nearly 14 GB per day, if my calculations are accurate. Given that I spend virtually all of my working hours actively on the Internet, estimated around five hours per day on the phone between cell and landline usage, and have a hard drive partition bulging with more than 8,000 digital photos … I have to wonder just what Hollis is doing to generate such a shadow.
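For the curious, here are both back-of-the-envelope calculations spelled out, using decimal units (1 TB = 1,000 GB), which is what the clock app appears to use:

```python
# My shadow: the clock app says 6.18 GB per day; projected over a full year.
my_yearly_tb = 6.18 * 365 / 1000
print(round(my_yearly_tb, 2))     # 2.26 -- close to the app's 2.25 TB figure

# Hollis's shadow: "over 1 TB" in the first 72 days of 2008.
hollis_daily_gb = 1000 / 72
print(round(hollis_daily_gb, 1))  # 13.9 GB per day, at minimum
```

The small rounding differences aside, Hollis’s daily output works out to more than double mine.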
You too can find out how much you’re contributing daily to the storage industry via the mini-app, which is posted for download here.
QLogic this week became the third vendor to claim it is the first to ship 8 Gbit/s Fibre Channel equipment.
QLogic says its 8Gb PCI-Express HBAs and 8-gig switches are available as a Hewlett-Packard StorageWorks 8Gb Simple SAN Connection Kit and from QLogic distributors. Although other vendors claim to have 8-gig devices, QLogic marketing vice president Frank Berry said his rivals aren’t shipping those products yet. That’s news to Brocade and Emulex. IBM, Sun and NetApp have said they are offering Brocade’s 8-gig DCX Backbone director, and Emulex lists Ingram Micro and TechData among the distributors selling its 8-gig HBAs.
But QLogic is the first vendor to offer up a real live 8-gig user. Managed hosting services firm InteleNet Communications has been testing QLogic 8-gig HBAs and switches, and general manager Carlos Oliviera expects to be an early adopter. InteleNet provides storage, security, networking, data backup and disaster recovery services out of a 55,000-square-foot data center located in Irvine, Calif., and a smaller data center in Denver.
Oliviera said 8-gig Fibre Channel gear will help InteleNet provide better service for its customers in several ways.
“Our machines are diskless,” Oliviera said. “We started testing 8-gig equipment to enhance the speed of data transfers. We have customers with a high demand for utilization; they need to open several applications on the same machines. They need high I/O.
“And we’re seeing a lot of demand for disaster recovery where they replicate content across different disk controllers over the SAN, and they want to get that done as fast as possible.”
Besides the performance boost, Oliviera said 8-gig lets him connect more actual and virtual servers to his storage and adds redundancy. He expects to add 8-gig to his production system by mid-year when InteleNet installs its next 50-server rack.
InteleNet is probably the exception at this point for seeing value in 8-gig. Not even the storage vendors expect customers to move to 8-gig as fast as they went from 2- to 4-gig a few years back. The 8-gig HBAs and switches will cost about 15 percent more than 4-gig gear at the start. And the 8-gig ecosystem will take longer to develop. No system vendors have disclosed plans for 8-gig systems yet, and hard drive vendors probably won’t ever develop 8-gig Fibre Channel drives.
But Oliviera said the more expensive 8-gig gear makes sense for him because it lets his company add revenue through new customers. “The return on investment is still very good,” he said. “One of the things that pushed this is virtualization. Now we can sell a lot more services with the same resources, which gives us a better ROI.”
Last week I was briefed by Internet security software company Trend Micro on its new email archive offering, dubbed the Trend Micro Message Archiver, which was launched Monday.
The product, from a storage geek’s point of view, is about as bleeding-edge as its name. It has the usual checklist items we’ve been hearing about from earlier arrivals to this market, from indexing to .pst import, and it does MD5 hashing for content-addressed storage. At times it feels like the email archiving players have all seen the same Chinese menu somewhere and are picking and choosing from it: a superset of common features so ubiquitous in that market that it’s begun to feel commoditized.
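Content addressing, for the uninitiated, means the store derives each object’s address from a hash of its contents, so identical messages land at the same address and any tampering changes the address. A bare-bones sketch in Python, using MD5 since that’s the hash mentioned for TMMA (real CAS products layer metadata, replication and collision handling on top of this):

```python
import hashlib

store = {}  # content address -> object bytes

def put(obj: bytes) -> str:
    """Store an object under its MD5 content address; return the address."""
    addr = hashlib.md5(obj).hexdigest()
    store[addr] = obj
    return addr

def get(addr: str) -> bytes:
    obj = store[addr]
    # Re-hashing on read detects silent corruption or tampering.
    assert hashlib.md5(obj).hexdigest() == addr, "content does not match address"
    return obj

a1 = put(b"quarterly earnings email")
a2 = put(b"quarterly earnings email")  # duplicate message, identical address
assert a1 == a2
assert get(a1) == b"quarterly earnings email"
```

The side effect — duplicates collapsing into a single stored copy — is why CAS shows up on so many email archiving spec sheets.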
What captured my attention when it came to TMMA isn’t the product but who’s offering it. Trend Micro is a 20-year-old, global, $848 million-a-year company. Since 2004, MSN Hotmail has been using Trend Micro to scan messages and attachments in its users’ accounts.
The first thing this means is that the product will be integrated as it matures with TM’s access controls, anti-spam and anti-virus filters, email certification and encryption features. Trend Micro’s not alone in this kind of integration (Lucid8 and others jump to mind), but they are pretty unique in terms of their size and brand recognition. And the times I’ve stepped out of my little storage-centric cave and spoken with people in adjacent markets–like, say, the e-discovery and legal compliance folks–I’ve heard many of them say that the storage guys aren’t getting it in some areas, like evidentiary standards that may apply to emails in court beyond what most email archivers offer today. It might be that a little expertise from other markets is what these products need.
This also might be where this new wave of non-storage vendors like Trend Micro making forays into the storage market will find a way to add value. For security-concerned customers, the TM product could offer a focus on security integration, delivery from an already-trusted vendor, and the ever-popular ‘one throat to choke’ as well.
But then again, the ultimate purpose of the product is to store and protect email data. The security features are nice, but secondary to the main function of the product. And many storage admins would probably rather go with a vendor that has experience in the core feature of the product, which is data protection.
I’m also seeing this dichotomy emerge in another hot market–storage SaaS. In that market, there are also new offerings from experienced storage players competing against new ‘one stop shop’ offerings from adjacent players–EMC’s Fortress vs. new backup and hosted storage offerings from data center service providers like The Planet and Savvis.
I, for one, am curious to see which model users will find preferable as overlaps grow between the different disciplines of IT. Which will be more important: focus, as in focus on the existing relationship with the customer and consolidated vendor relationships, or experience, in designing and supporting storage products?
Our friends at Homeland Security are known to use the term “chatter” to refer to the level of terrorist communications they’re intercepting at any given time. Any large consortium of humans will have its own chatter, with its own quirky patterns and trends, and the storage market is no exception.
Right now, in the wake of VMworld Europe, there’s quite a bit of chatter going on about developing conflicts between VMware and its partners, especially in storage.
Still, where earlier discussions on this blog have been purely speculative, some new articles and posts have surfaced that push the observation further toward reality. There are those who continue to pooh-pooh the idea, but Burton Group analyst Chris Wolf, whom I interviewed for the post linked above, came away from discussions with partners at VMworld Europe seeing Storage VMotion as more disruptive to VMware’s alliances than ever:
…it did not take long for me to realize that storage vendors were not exactly singing Storage VMotion’s praises. Instead, many storage vendors were still feeling Storage VMotion’s sting. Why should they care about a new storage value-add in ESX 3.5? Vendors that offer storage virtualization as an integral part of their products have seen one of their key value-adds move to the ESX hypervisor and as a result see Storage VMotion as a threat to their bottom line.
Then, our SearchITChannel.com sister site published an article today about how channel partners, too, are feeling conflicted over VMware:
Some VMware partners are blaming the company’s rapid ascent and aggressive strategy in the server virtualization market for creating channel conflict.
VMware’s strategy, according to these value-added resellers (VARs) and independent software vendors (ISVs), has been to fill gaps in its market coverage by acquiring partners in those specific segments and integrating new technologies into its hypervisor. And the more niche markets the vendor enters, the more competition it creates with its partners.
“They have a go-it-alone approach,” said Erik Josowitz, vice president of product strategy for Surgient Inc., a VMware ISV partner in Austin, Texas. “They’re predatory in a certain sense.”
News writer Colin Steele, who reported that story, has some further tidbits on his blog as well (though he brings up the Patriots’ 18-1 season in that post, which stings a little for yours truly).
It’s still not much more than a matter of opinion and conjecture at this point, but it looks to me like when it comes to VMware and its partners, especially in the storage market, the plot is thickening.
EMC and IDC have published an update to their Digital Universe report, which met with skepticism when it was originally published last March. We’re generally skeptical of vendor-sponsored analyst reports around here, too, but there was one data point that jumped out at me in this report: over the next three years, 70% of information will be created by individuals but 85% of it will be managed by corporations.
Even more interestingly, the report says the majority of that data created by individuals won’t be created consciously. We are sprouting digital “shadows” such as credit card numbers, bank records, health records, etc., which are increasingly used to identify us and conduct business in the modern economy.
So, to review, in the future, 70% of the information EMC makes money storing will be yours, but it probably won’t be controlled or even consciously generated by you.
EMC’s message around this report is that businesses are going to need to be more aware of this personal digital information, because it’s going to put strain on their storage systems, and also because given the statistics above, individuals are going to be counting on businesses to store and manage their information in a way that preserves privacy and the integrity of the data.
Even where corporate storage managers aren’t directly in the business of information retention for consumers, virtually everyone is going to have to worry about data “hygiene” with the increasing blend of business and personal information on portable devices such as laptops and PDAs. This is something my Storage Soup colleague Tory Skyers has thought and spoken a lot about, including some presentations at Storage Decisions, and it’s still a problem without a clear solution for many in corporate IT.
For users already struggling with that issue, the EMC / IDC report has some further bad news:
- The digital universe in 2007 – at 2.25 x 10^21 bits (281 exabytes, or 281 billion gigabytes) – was 10% bigger than we thought. The resizing comes as a result of faster growth in cameras, digital TV shipments, and better understanding of information replication.
- By 2011, the digital universe will be 10 times the size it was in 2006.
- As forecast, the amount of information created, captured, or replicated exceeded available storage for the first time in 2007. Not all information created and transmitted gets stored, but by 2011, almost half of the digital universe will not have a permanent home.