NetApp has been on the radar screen more than usual this week, with the announcement of its rebranding campaign followed by its Analyst Day 24 hours later.
When I’ve written about NetApp over the past few months, I’ve received feedback from its customers about discrepancies between what NetApp’s claiming and what they’re seeing in their shops. A while back, I heard from a user of one of NetApp’s arrays who said there had been a difference between what his NetApp sales rep told him and what actually happened after he installed the machine, specifically when it came to Fibre Channel LUNs and snapshots.
According to NetApp, its current official best practices state that LUNs carry the same snapshot overhead as other data on FAS systems, an estimated 20%. But in my Analyst Day coverage, I quoted a different user who said he’s seen overhead issues with LUNs; his understanding is that the best practice is 100% overhead for snapshots of that data.
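The practical stakes of that discrepancy are easy to illustrate with some back-of-the-envelope arithmetic. This is a generic sketch, not NetApp’s official sizing guidance; the two overhead figures are simply the 20% and 100% numbers at issue above:

```python
# Illustrative only: compare the capacity you'd provision for a LUN under
# the two "best practice" snapshot-overhead figures discussed above
# (20% vs. 100%). Hypothetical numbers, not vendor sizing guidance.

def provisioned_capacity_gb(lun_size_gb: float, snapshot_overhead: float) -> float:
    """Total space to set aside: the LUN itself plus its snapshot reserve."""
    return lun_size_gb * (1.0 + snapshot_overhead)

lun_gb = 500
print(provisioned_capacity_gb(lun_gb, 0.20))   # 20% overhead -> 600.0 GB
print(provisioned_capacity_gb(lun_gb, 1.00))   # 100% overhead -> 1000.0 GB
```

On a 500 GB LUN, the difference between the two interpretations is 400 GB of reserved capacity, which is exactly the kind of gap a customer would notice after installation.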
Another comment in my Analyst Day story suggested high-end customers don’t view NetApp’s boxes as matching the reliability of Tier 1 arrays. Val Bercovici, director of competitive sales for NetApp, said that attitude is outdated because it doesn’t take into account the vendor’s more recent focus on higher-end storage. As he put it, it represents a “very rear-view mirror view of NetApp.” But he conceded that this is exactly why NetApp is trying to change its messaging.
But messaging can be a big part of the problem when there is a discrepancy between what the sales rep claims and what the actual hands-on engineer wants to configure. Who knows more about how the machine is actually going to perform, a sales or marketing rep or an engineer doing the installation?
The user I quoted in the Analyst Day story, Tom Becchetti, says he was told about the 100% overhead best practice at a training class last fall, and concedes the instructor could have been going on outdated information. But if that’s the case, the lack of up-to-date information in the field about NetApp products is as big a problem as the overhead itself.
I’m hoping there are other NetApp end users floating around who can weigh in on this. What has been your experience with NetApp products? What has been your understanding of NetApp’s best practices? What would you like to see NetApp doing as it tries to reinvent itself?
The other day I blogged about an update to the “Digital Universe” report EMC sponsored with IDC, which amended estimates of the size of said digital universe upward.
Today while surfing around I saw EMC blogger Chuck Hollis’s post on the report, which contained an intriguing tidbit:
By the way, there’s some new bling for your PC. Last year, as part of the study, EMC offered up a “digital clock” that attempted to measure all information produced in aggregate.
This year, there’s a “personal digital clock” that (after answering a few questions) will estimate just how much digital footprint you’re creating: both directly and indirectly. It’s a bit humbling.
As an example, the personal clock estimates that I’ve created well over a terabyte of “digital shadow” this year so far. And that’s not even counting these blog posts!
Just doing my part for the storage industry, I guess…
I was definitely interested in finding out the exact dimensions of my digital shadow (proven fact: self-absorption is a key driver of Internet traffic), so I downloaded the mini-application they’ve put together with IDC to calculate one’s digital footprint.
It asked a series of questions about surfing habits, the number of minutes you spend on the phone per week, the amount of TV you record on your TiVo, that sort of thing. I was actually a little embarrassed at some of the numbers I put in–some of them were high indeed, especially the ‘hours per week you are actively on the Internet’ one.
Once I’d answered the questionnaire, the application calculated that I generate 6.18 GB of personal digital information per day, meaning that this year I will generate 2.25 TB of digital shadow.
Hollis, meanwhile, writes that he’s already generated over 1 TB this year. Today, March 13, is the 72nd day of 2008, putting him at roughly 14 GB per day, if my calculations are accurate. Given that I spend virtually all of my working hours actively on the Internet, estimate five or so hours per day on the phone if you combine cell and landline usage, and have a hard drive partition bulging with more than 8,000 digital photos … I have to wonder just what Hollis is doing to generate such a shadow.
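For anyone who wants to check my math, here’s the arithmetic in a few lines of Python (assuming decimal units, where 1 TB = 1,000 GB, and 2008’s 366 days):

```python
# Back-of-the-envelope check of the "digital shadow" numbers above.
# Assumes decimal units (1 TB = 1,000 GB) and that Hollis's 1 TB
# accrued over the first 72 days of 2008.

DAYS_SO_FAR = 72          # March 13 is day 72 of 2008
DAYS_IN_2008 = 366        # leap year

my_daily_gb = 6.18
my_yearly_tb = my_daily_gb * DAYS_IN_2008 / 1000
print(round(my_yearly_tb, 2))          # ~2.26 TB, matching the app's 2.25 TB

hollis_daily_gb = 1000 / DAYS_SO_FAR   # "over 1 TB" in 72 days
print(round(hollis_daily_gb, 1))       # ~13.9 GB per day
```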
You too can find out how much you’re contributing daily to the storage industry via the mini-app, which is posted for download here.
QLogic this week became the third vendor to claim it is the first to ship 8 Gbit/sec Fibre Channel equipment.
QLogic says its 8Gb PCI-Express HBAs and 8-gig switches are available as a Hewlett-Packard StorageWorks 8Gb Simple SAN Connection Kit and from QLogic distributors. Although other vendors claim to have 8-gig devices, QLogic marketing vice president Frank Berry said his rivals aren’t shipping those products yet. That’s news to Brocade and Emulex. IBM, Sun and NetApp have said they are offering Brocade’s 8-gig DCX Backbone director, and Emulex lists Ingram Micro and TechData among the distributors selling its 8-gig HBAs.
But QLogic is the first vendor to offer up a real live 8-gig user. Managed hosting services firm InteleNet Communications has been testing QLogic 8-gig HBAs and switches, and general manager Carlos Oliviera expects to be an early adopter. InteleNet provides storage, security, networking, data backup and disaster recovery services out of a 55,000 square foot data center located in Irvine, Calif., and a smaller data center in Denver.
Oliviera said 8-gig Fibre Channel gear will help InteleNet provide better service for its customers in several ways.
“Our machines are diskless,” Oliviera said. “We started testing 8-gig equipment to enhance the speed of data transfers. We have customers with a high demand for utilization; they need to open several applications on the same machines. They need high I/O.
“And we’re seeing a lot of demand for disaster recovery where they replicate content across different disk controllers over the SAN, and they want to get that done as fast as possible.”
Besides the performance boost, Oliviera said 8-gig lets him connect more actual and virtual servers to his storage and adds redundancy. He expects to add 8-gig to his production system by mid-year when InteleNet installs its next 50-server rack.
InteleNet is probably the exception at this point for seeing value in 8-gig. Not even the storage vendors expect customers to move to 8-gig as fast as they went from 2- to 4-gig a few years back. The 8-gig HBAs and switches will cost about 15 percent more than 4-gig gear at the start. And the 8-gig ecosystem will take longer to develop. No system vendors have disclosed plans for 8-gig systems yet, and hard drive vendors probably won’t ever develop 8-gig Fibre Channel drives.
But Oliviera said the more expensive 8-gig gear makes sense for him because it lets his company add revenue through new customers. “The return on investment is still very good,” he said. “One of the things that pushed this is virtualization. Now we can sell a lot more services with the same resources, which gives us a better ROI.”
Last week I was briefed by Internet security software company Trend Micro on its new email archive offering, dubbed the Trend Micro Message Archiver, which was launched Monday.
The product, from a storage geek’s point of view, is about as bleeding-edge as its name. It has the usual checklist items we’ve been hearing about from earlier arrivals to this market, from indexing to .pst import, and it also does MD5 hashing for content-addressed storage. At some point it starts to feel like the email archiving players have all seen the same Chinese menu somewhere and are picking and choosing from it: there’s a set of common features so ubiquitous in this market that it’s begun to feel commoditized.
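For readers unfamiliar with the term, here’s a minimal sketch of what content-addressed storage with a hash like MD5 buys an email archiver: identical messages collapse to a single stored copy, and the address doubles as an integrity check. This is a generic illustration, not Trend Micro’s implementation (the class and sample message are made up for the example):

```python
# A minimal sketch of content-addressed storage: each blob is stored
# under the hash of its bytes, so duplicates are stored once and any
# change to the content changes the address. MD5 matches the product
# description above; newer systems generally favor SHA-256.
import hashlib

class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.md5(data).hexdigest()  # the content address
        self._blobs[digest] = data              # duplicates collapse to one copy
        return digest

    def get(self, digest: str) -> bytes:
        return self._blobs[digest]

store = ContentStore()
addr1 = store.put(b"From: alice@example.com\n\nQuarterly numbers attached.")
addr2 = store.put(b"From: alice@example.com\n\nQuarterly numbers attached.")
print(addr1 == addr2)        # True: identical emails share one address
print(len(store._blobs))     # 1: only one copy actually stored
```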
What captured my attention when it came to TMMA isn’t the product but who’s offering it. Trend Micro is a 20-year-old, global, $848 million-a-year company. Since 2004, MSN Hotmail has been using Trend Micro to scan messages and attachments in its users’ accounts.
The first thing this means is that, as the product matures, it will be integrated with TM’s access controls, anti-spam and anti-virus filters, email certification and encryption features. Trend Micro’s not alone in this kind of integration (Lucid8 and others jump to mind), but it is fairly unique in terms of size and brand recognition. And when I’ve stepped out of my little storage-centric cave and spoken with people in adjacent markets–like, say, the e-discovery and legal compliance folks–I’ve heard many of them say that the storage guys aren’t getting it in some areas, such as evidentiary standards that may apply to emails in court beyond what most email archivers offer today. It might be that a little expertise from other markets is just what these products need.
This also might be where this new wave of non-storage vendors like Trend Micro making forays into the storage market will find a way to add value. For security-concerned customers, the TM product could offer a focus on security integration, delivery from an already-trusted vendor, and the ever-popular ‘one throat to choke’ as well.
But then again, the ultimate purpose of the product is to store and protect email data. The security features are nice, but secondary to the main function of the product. And many storage admins would probably rather go with a vendor that has experience in the core feature of the product, which is data protection.
I’m also seeing this dichotomy emerge in another hot market–storage SaaS. In that market, there are also new offerings from experienced storage players competing against new ‘one stop shop’ offerings from adjacent players–EMC’s Fortress vs. new backup and hosted storage offerings from data center service providers like The Planet and Savvis.
I, for one, am curious to see which model users will find preferable as overlaps grow between the different disciplines of IT. Which will be more important: focus, as in focus on the existing relationship with the customer and consolidated vendor relationships, or experience, in designing and supporting storage products?
Our friends at Homeland Security are known to use the term “chatter” to refer to the level of terrorist communications they’re intercepting at any given time. Any large consortium of humans will have its own chatter, with its own quirky patterns and trends, and the storage market is no exception.
Right now, in the wake of VMworld Europe, there’s quite a bit of chatter going on about developing conflicts between VMware and its partners, especially in storage.
Still, where earlier discussions on this blog have been purely speculative, some new articles and posts have surfaced that push the observation further toward reality. There are those who continue to pooh-pooh the idea, but Burton Group analyst Chris Wolf, whom I interviewed for the post linked above, came away from discussions with partners at VMworld Europe seeing Storage VMotion as more disruptive to VMware’s alliances than ever:
…it did not take long for me to realize that storage vendors were not exactly singing Storage VMotion’s praises. Instead, many storage vendors were still feeling Storage VMotion’s sting. Why should they care about a new storage value-add in ESX 3.5? Vendors that offer storage virtualization as an integral part of their products have seen one of their key value-adds move to the ESX hypervisor and as a result see Storage VMotion as a threat to their bottom line.
Then, our SearchITChannel.com sister site published an article today about how channel partners, too, are feeling conflicted over VMware:
Some VMware partners are blaming the company’s rapid ascent and aggressive strategy in the server virtualization market for creating channel conflict.
VMware’s strategy, according to these value-added resellers (VARs) and independent software vendors (ISVs), has been to fill gaps in its market coverage by acquiring partners in those specific segments and integrating new technologies into its hypervisor. And the more niche markets the vendor enters, the more competition it creates with its partners.
“They have a go-it-alone approach,” said Erik Josowitz, vice president of product strategy for Surgient Inc., a VMware ISV partner in Austin, Texas. “They’re predatory in a certain sense.”
News writer Colin Steele, who reported that story, has some further tidbits on his blog as well (though he brings up the Patriots’ 18-1 season in that post, which stings a little for yours truly).
It’s still not much more than a matter of opinion and conjecture at this point, but it looks to me like when it comes to VMware and its partners, especially in the storage market, the plot is thickening.
EMC and IDC have published an update to their Digital Universe report, which met with skepticism when it was originally published last March. We’re generally skeptical of vendor-sponsored analyst reports around here, too, but there was one data point that jumped out at me in this report: over the next three years, 70% of information will be created by individuals but 85% of it will be managed by corporations.
Even more interestingly, the report says the majority of that data created by individuals won’t be created consciously. We are sprouting digital “shadows” such as credit card numbers, bank records, health records, etc., which are increasingly used to identify us and conduct business in the modern economy.
So, to review, in the future, 70% of the information EMC makes money storing will be yours, but it probably won’t be controlled or even consciously generated by you.
EMC’s message around this report is that businesses are going to need to be more aware of this personal digital information, because it’s going to put strain on their storage systems, and also because given the statistics above, individuals are going to be counting on businesses to store and manage their information in a way that preserves privacy and the integrity of the data.
Even where corporate storage managers aren’t directly in the business of information retention for consumers, virtually everyone is going to have to worry about data “hygiene” with the increasing blend of business and personal information on portable devices such as laptops and PDAs. This is something my Storage Soup colleague Tory Skyers has thought and spoken a lot about, including some presentations at Storage Decisions, and it’s still a problem without a clear solution for many in corporate IT.
For users already struggling with that issue, the EMC / IDC report has some further bad news:
- The digital universe in 2007 – at 2.25 x 10²¹ bits (281 exabytes or 281 billion gigabytes) – was 10% bigger than we thought. The resizing comes as a result of faster growth in cameras, digital TV shipments, and better understanding of information replication.
- By 2011, the digital universe will be 10 times the size it was in 2006.
- As forecast, the amount of information created, captured, or replicated exceeded available storage for the first time in 2007. Not all information created and transmitted gets stored, but by 2011, almost half of the digital universe will not have a permanent home.
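The headline figure checks out, by the way, if you run the unit conversion (using decimal units, where 1 exabyte = 10^18 bytes):

```python
# Sanity-checking the report's headline figure: 2.25 x 10^21 bits
# should come out to about 281 exabytes (decimal units).
bits = 2.25e21
bytes_total = bits / 8
exabytes = bytes_total / 1e18
print(round(exabytes))  # 281
```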
My fellow children of the ’80s will remember an anti-smoking PSA by C3PO and R2D2 from back in the day that ends with the plaintive line from C3PO, “R2? Do you really think I don’t have a heart?”
That image jumped to mind when I heard about NetApp’s rebranding efforts, which include a new advertising campaign with a disembodied human heart as the central image.
In addition to the heart imagery, NetApp has been a little like C3PO in other ways. According to its VP of corporate marketing Elisa Steele, the company decided to change its branding, name and image to try and break out of its “technical company” mold, which at this stage of its growth is holding it back from wider adoption. The company means to maintain its technical culture, but otherwise seems to want to become more than, well, an android.
I’ve been fielding quite a few requests for legal holds recently, and I’ve been tracking the storage used by legal holds on our SAN and tape library. Out of curiosity, I started doing research on the average length of a trial, then tabulating the cost of storing the data requested on WORM for that time.
Guess what I’ve found? Some trials last a loooooooong time, and the costs are not insignificant. Now I see why Beth has been ringing the alarm about FRCP.
My company has been very lucky — we have a great risk and legal team as well as solid policy. But people will still sue if you have a business address. The incidental cost of keeping someone’s mailbox around for five years or so while they litigate (then appeal when they lose) is high, but can a company afford not to do so? What happens when you can’t produce an email to back up your side of a dispute? Worse still, what if the other side accuses you of damaging their case by not providing them with the emails they’ve requested?
There’s a “Safe Harbor” clause in the FRCP that absolves companies of responsibility if the company has — and strictly follows — a deletion and retention policy. This protects the company from falling afoul of the regulation, but does my act (as an end user) of deleting an email fall under the “Safe Harbor” clause?
Let me put on my lawyer hat. Okay, it’s on. I’ve seen some precedent that leads me to believe that simply having and following a policy is not enough. Say that, as a network administrator, I have a policy that strictly prohibits viewing pornography on a company network. I can communicate the policy, but if I don’t have measures in place to actively block pornography or follow up on complaints about it, I may leave myself open to suit. Some of you may be thinking, “Why would you have a rule that you can’t look at pornography and not have a content filter in place?” My point exactly: Why have a deletion and retention policy, and allow people to do their own deleting and retaining?
This is going to get very esoteric and confusing (as many of our laws are), but what I took away from this article was this: If you allow me to do something, you may be implicitly approving of the behavior. Not to mention that while the employee viewing the pornography is breaking the rules and doesn’t have a case against me, what about the person walking by their terminal who sees it against their will?
So as it relates to e-discovery, if you allow me to delete my own emails, are you implicitly approving of me disobeying retention and deletion policy?
I started thinking about this a little deeper (which almost always spells trouble) and technically, it seems like I would have to have CDP in place and store every email entering and leaving every mailbox forever to be really covered against every contingency. Suppose I’m an end-user, and I delete an incriminating email, but then sue and claim I need the email to prove my case, and that you should have that email available … BUT my mailbox wasn’t backed up before I deleted the message. Are you, the respondent, still in hot water?
Implications abound here. Will SMBs that fall under some form of regulation — SOX, HIPAA, etc. — have to store every email forever? I’d love some readers to weigh in on this. Have any of you out there fought this battle with management? Do you know of any vendors that have products that address this particular issue?
I’m curious as to how deep this particular rabbit hole goes and how many folks have been forced to follow it to its logical end. Is there a crazy playing card there yelling “Off with their heads!!”?
It doesn’t necessarily have anything to do with storage, but I got a chuckle out of this story from Reuters UK today about a lawsuit filed against IBM:
A small Japanese bank has slapped International Business Machines Corp with a $107 million lawsuit, saying the technology giant failed to properly deliver on a computer deal.
Suruga Bank, based in Shizuoka Prefecture west of Tokyo, hired IBM in 2004 to help overhaul its computer system, but later baulked [sic] at the proposed changes.
“We are suing because we decided it would be difficult to implement the system they suggested,” a spokesman for the bank said.
Kind of a dangerous precedent for storage vendors if this suit is successful, don’t you think?
Two storage-related announcements came out of CeBIT this week that have turned a few heads.
The first is the FlashBack Adapter from thumb-drive king SanDisk. The device fits into the ExpressCard slot of a user’s PC, and automatically and continuously backs up and encrypts data onto a flash memory card. This way, to quote SanDisk, when “you’re at a conference and someone spills coffee on your laptop PC, shorting out the system and cutting you off from your presentation and notes. Or your computer slips out of your hands and crashes to the floor,” you can extract the memory card from the smoking wreckage, find another PC and be on your way.
The second announcement comes from a UK company called Retrodata, which is reportedly getting ready to release a do-it-yourself drive recovery system. The beast, which has yet to be photographed, reportedly weighs 75 kg (165 lbs.) and will be priced at around $7,000. But for all you Austin Powers fans out there, it does come equipped with…”lasers”.
According to techchee, a blog dedicated to high-tech products:
The device uses laser-guided positioning to help it accurately extract platters from any 3.5 inch hard drive with minimal user intervention. The unusual element is that such devices normally require highly skilled operators, whereas the System P. EX can be used by a relative novice at a data recovery company.
Maybe if Retrodata plays its cards right, it’ll get an order for…one million dollars.