Some people will do anything to avoid data migration.
You can’t inhabit the storage industry for longer than a week without becoming intimately acquainted with how painful data migration is. Whole product categories have sprung from this pain.
But…still…you might have hoped that Massachusetts-based application and Internet service provider NaviSite would have had more foresight than to consolidate data centers by packing servers onto trucks owned by a recently acquired company and shipping them up to Boston. In the process, according to a Boston.com account of the chaos that ensued, the company “cut off Internet service to thousands of individuals and small businesses across the country for nearly a week.”
One Boston blog made a frank assessment of the situation:
…instead of doing it the right way (buying new servers, getting them running in Andover, then using that wacky thing called the Internet to move all the data from Baltimore to Andover), they did it the majorly wrong way.
For most of our audience, there’s at least one comforting lesson to be taken from this story: you could be having a pretty bad day, but at least you’re not the guy who made that decision.
ETA: Speaking of roads and data. We have to take our storage humor where we can get it, so I also feel the need to share with you the line from an acquaintance of mine when he heard about Fujifilm’s new GPS tape tracking unit: “I wonder what a tape falling off the back of a truck and bouncing down the road looks like on a GPS system.”
“Boss”. Photo courtesy of TartanRacing.org.
Nope, nobody slipped anything into my morning coffee–this did actually happen. NetApp was a member of the Tartan Racing Team, a group made up of engineers from Carnegie Mellon University and corporate sponsors/partners including GM, Caterpillar and Continental. Tartan faced off against other teams, each spearheaded by a university robotics group such as MIT’s or Cornell’s and joined by technology and automotive vendors, in a contest to create vehicles that give new meaning to the word “automatic.”
The contest is run by the Defense Advanced Research Projects Agency (DARPA), the same government entity (along with Al Gore) credited with creating the Internet. DARPA challenged each research team to create a vehicle that could drive itself, unmanned, through any terrain, and set up a twofold contest–one race in the desert and one in an urban environment–to test the entries.
NetApp has sold some of its small filers into military accounts for use on transport vehicles in combat zones, but this time didn’t contribute technology to the car itself–more like the car workshop, where NetApp storage was used to log and analyze data as the vehicle was developed.
Tartan and its creation, a Chevy Tahoe dubbed “Boss,” won the desert portion of the race, a 170-mile course that only three of 11 teams finished. Tartan also won the 60-mile urban course, which six teams finished. The Discovery Channel will be covering the DARPA contest and all its entrants in a multi-part miniseries set to air in the spring, NetApp officials said; the network even brought in the stars of Mythbusters to act as TV analysts for the event.
The whole event was put on in fun, of course, but imagine the creepy possibilities of this technology: unstoppable, unmanned tanks storming cities; unmanned SUVs hunting in streets for the enemy. How would you even defend against something like that? It boggles the mind.
“Well,” said Chris Urmson, director of technology for the Urban Challenge at Carnegie Mellon’s Robotics Institute, somewhat uncomfortably cutting off my Calvin-like woolgathering. “It’s not just a weapon.” Urmson pointed out that one of the primary use cases the military envisions for unmanned vehicles is a kind of trackless train: driverless SUVs following a single manned vehicle at the front, cutting down on the casualties associated with supply convoys in combat zones. The driverless cars also have possible commercial applications (Minority Report, anyone?) as well as a possible place in agriculture.
I just sometimes have a little too vivid an imagination.
Executive holdovers from Veritas are growing fewer and farther between at Symantec, following the company’s disclosure in an SEC filing this week that Kris Hagerman, president of the Data Center Management group, has left. According to a Reuters story on the departure, Symantec has also confirmed the exit of James Socas, senior vice president of Corporate Development, who led the mergers and acquisitions practice.
These departures are the latest in a general exodus of former Veritas officials from Symantec since the companies merged in December 2005. Veritas CEO Gary Bloom and data management group head Jeremy Burton also left Symantec before Hagerman. Beyond saying “a number of factors contributed” to Hagerman’s departure, Symantec declined to specify a reason he walked away.
Symantec has been keeping up with frequent updates to its Veritas products, especially the “meat and potatoes” backup software products, Backup Exec and NetBackup. But its sales numbers and ranking in the market have continued to decline. On its most recent earnings call Symantec CEO John Thompson said the company would begin pruning back its Data Center products, but did not give specifics as to which products would get the axe:
Earlier this year, we started an active review of our product portfolio to ensure many of the investments made over the years are meeting our return expectations. During the September quarter we identified some non-strategic assets in the Data Center Management Group that have not met those expectations. As a result, we are taking an $87 million writedown of some assets in the data center management group acquired by Veritas during the 2003/2004 timeframe. Going forward, we will continue to evaluate our portfolio to ensure that we focus our investment efforts on a few key strategic areas that drive long-term revenue growth.
Meanwhile, what had been a vibrant brand in Veritas has started to fade from the market, according to analysts. “Since the acquisition, Veritas has not been as visible in the industry as it was–they used to be a very engaged company and took a leadership position with technology,” according to Taneja Group founder Arun Taneja. “After a year of hiatus following the acquisition, Symantec is keeping up with products, but they’re not providing leadership like Veritas used to.”
For years, Dell has made noise about pushing deeper into storage outside of its partnership with EMC – all the while maintaining that partnership.
The deal tells us Dell is clearly interested in becoming a bona fide storage vendor, and it took the best route possible to making that happen. It also tells us it is only a matter of time before its partnership with EMC falls apart, despite an agreement that runs through 2011.
Nobody from Dell or EMC will say that. Their party line is that the EqualLogic products fall into Dell’s SMB PowerVault platform, and that Dell and EMC will continue to co-market midrange Clariion systems.
That argument doesn’t hold up for several reasons. First, EqualLogic’s higher end systems are not SMB plays. The PS3000 it launched a year ago has a starting list price of $65,000 – more than twice that of most PowerVault products. EqualLogic has always considered midrange storage titans EMC, Hewlett-Packard and Network Appliance its main competition. And EMC people privately consider EqualLogic a genuine midrange competitor. EqualLogic can help Dell sell to SMBs, but it has also been adding features such as thin provisioning and virtualization capabilities to make its SANs more enterprise friendly. Is Dell going to scrap those technologies? That’s unlikely.
Then there is the iSCSI factor. By buying EqualLogic, Dell is betting most of its storage chips on Ethernet-based iSCSI. That’s no surprise. Dell’s business is built on Ethernet. But EMC’s is built on Fibre Channel. While Fibre Channel vendors have come to sell iSCSI and accept that it has benefits, they’re not betting their business on it. The major Fibre Channel vendors have even created a new protocol — Fibre Channel over Ethernet (FCoE) — aimed at stunting iSCSI’s adoption. Down the road, the paths of Dell and EMC will diverge over iSCSI and Fibre Channel.
Finally, there is the personal relationship factor. The EMC-Dell partnership benefited from a close relationship between the companies’ respective CEOs, Joe Tucci and Kevin Rollins. Tucci even showed up unannounced at a Dell Technology Day last year to show support when angry investors were calling for Rollins’ head. Now Rollins is gone, and founder Michael Dell is back at the helm. Nobody’s saying Tucci and Dell don’t get along, but it’s not the same as Tucci and Rollins. And there’s no guarantee that EMC fits into Michael Dell’s plans to turn his company around.
The EMC-Dell marriage made great sense when it began in 2001. Both companies were staunch competitors of Hewlett-Packard and IBM, which sold both servers and storage. So instead of EMC making its own servers and Dell manufacturing storage, they partnered. And the relationship has worked out until now — Dell is responsible for about 16 percent of EMC’s storage systems revenue and around one-third of its Clariion sales. But the landscape has changed, accelerated by Dell’s purchase of EqualLogic. The main question is how long it will take for the divorce papers to be filed.
I know, I know … whenever I see “killer app” I roll my eyes too, but I’ve been shown the light, or indoctrinated, whichever way you want to look at it. So here is my take: I’ve found storage’s killer app, and it’s hiding in plain sight.
There … I said it … EMAIL.
At work we’ve been deeply involved in identifying the platform for our next generation SAN. We’ve been busy identifying performance metrics (the benchmarks I’ve taken off some of these machines are incredible–I’ll blog about it soon!!) and precisely what we use our storage for. I’m a big VMware fan and throughout this discovery process I’ve had in the back of my head that VMware is the biggest thing we need to plan for. I was looking at VMware/host virtualization as the killer app for storage and I was wrong.
While we don’t and wouldn’t actually look at what is in end users’ mailboxes, we do see the size of the mailboxes. We have quotas for most, but some folks warrant an exception. It dawned on me during this process that we’ve been building out our VMware infrastructure mainly to provide the filler for messaging and collaboration. I’m a cynic by nature, so I decided to take a deeper look into my own email habits to see if this theory held water.
If you ever really wanted to surprise yourself, take a look into the dusty corners of your home machine’s email program.
So my basic conclusion is my dinky laptop hard drive is no place for my archive.pst. I need RAID 10 striped across 14 drives … for just the imbroglio (this was a great SAT word submission!!) I call my inbox. How many attachments I have, how much email I actually get, how much of it I keep, how it gets indexed by my desktop search, and finally how it gets archived all lead to a surprising portion of my work laptop hard drive dedicated to email. I also started looking at where I pull the information from to put in my email and found that almost all the work I produce gets emailed to someone, then they store it or email it to someone else.
At work I have about 800MB of active email data and about 4GB archived for the last 6 months. At home it’s triple that–I actually had to build a virtual machine to handle my third-tier email archives (I like to have my email indexed and available) which leads me to why I believe email is the killer app for storage.
My personal email is always online, indexed and searchable. If I need a piece of information and I can remember one or two unique words I stand a great chance of retrieving it from just about anywhere (I use IMAPS) that I can install Thunderbird or Evolution. It is very convenient and once I figure out how to search from my BlackBerry I won’t ever have to remember anything but keywords.
This convenience of course requires storage, and not only that, but storage that can chew through tons and tons of 2K and 4K files (I use Postfix and Cyrus IMAP on virtual machines at home) to find the bit of information I’m looking for.
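If you want to see that small-file profile for yourself, a quick way is to bucket message sizes in a local mail store. Here’s a minimal sketch, assuming a hypothetical Maildir-style layout where each message is its own file (the function name and path are illustrative, not from any particular mail server):

```python
import os
from collections import Counter

def size_histogram(maildir, bucket_kb=2):
    """Walk a one-file-per-message mail store and bucket files by size.

    Returns a dict mapping bucket start (in KB) -> number of messages,
    which makes the pile of 2K and 4K files easy to see.
    """
    counts = Counter()
    for root, _dirs, files in os.walk(maildir):
        for name in files:
            size = os.path.getsize(os.path.join(root, name))
            # Integer-divide into bucket_kb-KB bins, keyed by bucket start
            counts[(size // (bucket_kb * 1024)) * bucket_kb] += 1
    return dict(counts)
```

Run against a real mailbox, the histogram tends to be dominated by the smallest buckets, which is exactly the workload that punishes storage systems tuned for big sequential files.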
Scale me to an enterprise level–think Ultraman. Today there are people in the workforce who don’t know what POP is (remember PINE? Ahh, the good old days…), have been using their Yahoo.com email account since they could type, and email things to themselves when they want to save them. There is a new generation coming to the workforce after them that EXPECTS to be able to have their entire lives searchable and indexed, Google-style.
At work we are moving toward the SharePoint 2007/Exchange 2007/Office 2007 hegemony in the next year or two, and I am concerned that we as an industry don’t really and truly understand what collaboration does to storage requirements.
If I can now collaborate completely on my computer, how long will my organization have to keep my OneNote stuff around? Will there be some sort of e-discovery for the group whiteboard? Where will we store all this stuff?
It may be a little early in 2007 to start prognosticating about what is going to occur in 2008 and beyond. However, there are some major trends — I would almost classify them as “mega-trends” — that I see taking shape. These trends indicate that, at a higher level, storage management is shifting from managing bits and bytes to treating storage as a cheap, abundant commodity that can be used to solve specific business problems.
Nowhere is this more evident than in the increasing number of small and midsize businesses (SMBs) that are switching to online backup. Though this trend started some time ago (some vendors noticed a serious uptick in business about 18 months ago), it should only accelerate in 2008.
Backup is strategic to SMBs only in the sense that SMBs recognize they need to do it and that they need help doing it. If they can outsource it for about the same cost or slightly more than they are paying now with a high level of assurance that it will work, most will do it.
Contributing to this trend is that backup service providers are maturing to become managed service providers (MSPs). They no longer provide just online backup and support user-initiated recoveries. They are diversifying to provide an entire range of data management services that SMBs need such as archiving, data classification and different tiers of disaster recovery services.
MSPs still are at different stages in providing these services and, for now, users should still view these new service offerings with a fair amount of skepticism. However, it is reasonable to assume that by 2009 MSPs should have many of the kinks worked out and will offer more robust data management services.
Another emerging trend is the need for storage managers to develop a close relationship with their legal departments. This is significant because the way IT manages data going forward will be driven as much by corporate legal departments as by internal business applications. “Just keep it all” or “Delete it after three years” may be good starting points for data management, but the world has become much more complicated than that.
Andrew Cohen, who handles EMC’s legal department and corporate compliance, cites cost, legal statutes, defensible data management policies and e-discovery as the specific reasons that data management polices need to evolve and for IT and legal departments to work more closely together. Yet, for storage managers to focus on broader business and legal issues, they must put into place a storage infrastructure that doesn’t require their constant attention and is self-managing and self-healing.
That leads to the last mega-trend I see emerging in storage: clustered storage. Anyone who deals with storage on a day-to-day basis knows that storage is anything but self-managing and self-healing — especially when used in a storage network. If anything, I would characterize most current storage network designs as exactly the opposite: self-destructing and self-defeating.
Clustered storage is shaping up to take one of two forms: clustered storage systems and virtualized storage. Clustered storage systems (sometimes called grid storage) from vendors such as NEC, Isilon Systems and Panasas, which can create one large logical storage pool, are probably the best option from a best-practices point of view. However, that model often requires companies to standardize on a single storage vendor’s product, which may or may not fit with how they procure their storage.
Virtualized storage is accomplished using a network-based storage virtualization product such as EMC’s InVista or Incipient’s iNSP. These products aggregate existing storage systems to present one logical storage pool to the server infrastructure, and they create a common console for common storage management functions such as data migration and provisioning.
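As an illustration only — not how InVista, iNSP, or any shipping product actually works internally — the aggregation idea boils down to a mapping layer that concatenates backend capacity into one logical address space:

```python
class StoragePool:
    """Toy mapping layer: one logical address space over several backends."""

    def __init__(self, backends):
        # backends: list of (name, capacity_in_blocks) tuples
        self.extents = []
        start = 0
        for name, capacity in backends:
            # Each backend owns a contiguous extent of the logical space
            self.extents.append((start, start + capacity, name))
            start += capacity
        self.total_blocks = start

    def locate(self, logical_block):
        """Translate a logical block address to (backend_name, physical_block)."""
        for start, end, name in self.extents:
            if start <= logical_block < end:
                return name, logical_block - start
        raise IndexError("block outside pool")
```

The appeal for management is that data migration becomes an operation on this mapping table (repoint an extent, copy blocks behind the scenes) rather than a forklift move visible to servers.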
How soon these emerging mega-trends come to pass remains to be seen. But dropping storage costs, the need for tighter relationships between IT and legal, and maturing storage technologies are contributing to the likelihood of these trends getting a foothold in 2008 and accelerating from there.
Sun has launched yet another set of countersuits against NetApp, this time in California. “Sun was legally obligated to respond in Texas to the initial suit brought on September 5, 2007 by Network Appliance to forestall competition from the free ZFS technology,” Sun said in a statement emailed to press this week. The statement continued:
Today we filed additional counterclaims in California, and specifically under the Lanham Act and California Business and Professions Code, based on Network Appliance’s false statements to the public about the alleged use of Network Appliance patents in ZFS. In parallel, we will be bringing a motion before the court in California asking that the case filed in Texas be consolidated with the case filed today for trial in the Bay Area, headquarters to both Sun and Network Appliance. Today’s filing includes counterclaims against the entirety of Network Appliance’s product line, including the entire NetApp Enterprise Fabric Attached Storage (FAS) products, V-series products using Data ONTAP software, and NearStore products, seeking both injunction and monetary damages.
Since Sun was forced to litigate, we feel California is a more appropriate venue to do so for several reasons. First, Sun and Network Appliance are both headquartered in Northern California, within 10 miles of each other. Second, most discovery will take place in California, as many of the key inventors on the patents and primary counsel for both parties are based in California. From both a judicial and economic standpoint, it makes much more sense for the case to be in California.
Sun has accused NetApp of “venue shopping” by choosing the Eastern District of Texas. A Sun statement responding to NetApp’s original suit called it “a legal jurisdiction (East Texas) long favored by ‘patent trolls.’” The choice of district has been a source of head-scratching even among people still reserving judgment, given that, as Sun’s statement points out, the two companies are both located in California.
Who knows what the truth is? It could also be that, since the district has a history of patent litigation, NetApp felt that court would be better able to discern the truth amid the he-said-she-said than a less experienced court in California.
But the longer this goes on (and boy, has it gone on), the more I start to think that even with the technical background I’ve picked up and the familiarity I have with both companies from covering storage for years, I’m not sure I would be able to sort out who’s right here. If this ever gets to trial, I do not envy the judge or potential jurors. Not one bit.
Of all the storage companies competing to sell data deduplication, Quantum is unique. That’s because it is primarily a tape vendor and data deduplication was developed to replace tape.
Look at some of the other vendors involved in what Data Domain CEO Frank Slootman calls a “land grab” for deduplication customers. Data Domain, Sepaton, Diligent Technologies and FalconStor sell virtual tape libraries (VTLs); EMC and Network Appliance sell massive disk arrays; and Riverbed sells WAN optimization. Their sales forces wouldn’t know LTO-4 tape from masking tape.
Then there is Quantum, which after gobbling up rival ADIC last year will sell close to $1 billion worth of tape products this year. Quantum CEO Rick Belluzzo isn’t buying into the “tape is dead” line you hear from most deduplication vendors.
“Tape will continue to have an important role,” he said. “Very few customers are looking to go tapeless.”
Quantum won’t be the only tape vendor selling deduplication devices for long. Overland Storage will come out with its deduplication appliance soon. Still, most deduplication vendors disagree with Belluzzo about the long-term future of tape. Sepaton spelled backward is “no tapes,” and the company was built on the premise that tape is going away. So was Data Domain, and Slootman says when Data Domain sells appliances, “We replace tape in almost every instance.”
Belluzzo said that’s because Data Domain sells to remote office and midsized companies. Quantum’s strategy is to push into the enterprise, with the DXi7500 enterprise system coming in a few months to go with Quantum tape libraries. He says it doesn’t have to be one or the other in large shops.
“I hear our competitors say, ‘It’s clear that tape is dead.’ That has no credibility with customers,” Belluzzo said. “We still sell tape. We see tape replacement along the edge, where they collect data and replicate it to the data center. But tape plays a critical role in centralized data centers and consolidated SAN backup schemes. The whole story is, in midsized and enterprise data centers, people are buying disk and tape together.”
Quantum claims 120 customers for its DXi deduplication appliances over the last six months. Market leader Data Domain has about 400 over the same period.
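For readers new to the technology, every appliance in this land grab rests on the same basic idea: split incoming backup streams into chunks, fingerprint each chunk, and store only the chunks you haven’t seen before. A toy fixed-size-block sketch (emphatically not any vendor’s actual implementation, which typically uses variable-size chunking and much cleverer indexing) looks like this:

```python
import hashlib

def dedupe(data, block_size=4096):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, recipe): store maps block hash -> block bytes;
    recipe is the ordered list of hashes needed to rebuild the stream.
    """
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy seen
        recipe.append(digest)
    return store, recipe

def reassemble(store, recipe):
    """Rebuild the original stream from the block store and recipe."""
    return b"".join(store[d] for d in recipe)
```

This is why disk starts to look price-competitive with tape for backup: a weekly full that is 99 percent identical to last week’s consumes only the new blocks plus a small recipe.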
Another area where Quantum does a balancing act is with its deduplication patent. With deduplication’s popularity rising and other vendors looking to get into the act, Quantum could license its technology and let others sell it. Data Domain paid a $5.4 million royalty for the patent earlier this year. And Quantum is suing Riverbed for patent infringement.
Belluzzo said Quantum is a product company, so licensing its technology takes a back seat. He won’t rule it out, though.
“It’s always a balance you face: do you hold onto it and let the market work around you, or do you exploit it for commercial purposes and let the market come to you?” he said. “We’re trying to balance that now.”
I have yet to get a letter from an institution with which I do business that starts like this:
Dear Current or Former PEIA, WVCHIP, or AccessWV Member:
We are writing to you because of a recent data security incident. On October 16, 2007, a mainframe computer tape containing your and your dependents’ name, address, and social security number was reported as lost by United Parcel Service (UPS) while en route to PEIA’s data analyst.
But the longer I stay on the storage beat, the more I feel like the day is coming.
Hitachi GST is back at it again this week with another update to its disk drives, this time with a redesign of its desktop SATA and PATA drives for power efficiency. Hitachi claims the updates to its silicon on the new Deskstar P7K500 drive can reduce the drive’s power consumption by up to 40 percent–or down to as low as 6 watts when active and 2 to 3 watts while idle.
The new specs were accomplished in a couple of ways: a new system-on-a-chip design for the power modules, and a change in each drive’s power regulator from a linear architecture to a switched one. The moves were made with an eye toward the new Energy Star 4.0 spec for PCs released in July, which allots a “budget” of 50 watts for the whole system while idle, of which an estimated 8.3 watts go to the disk drive. With the new 250 GB version of the Deskstar, Hitachi is claiming a draw of 3.6 watts in idle mode, and 4.8 watts for the 320, 400 and 500 GB models.
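Running those numbers through a quick sanity check shows where the headroom comes from. The 8.3-watt drive allotment and Hitachi’s claimed idle figures are from the announcement; the subtraction is mine:

```python
# Idle-mode figures, in watts, as reported in the announcement
ENERGY_STAR_SYSTEM_BUDGET = 50.0   # whole-PC idle budget under Energy Star 4.0
DRIVE_ALLOTMENT = 8.3              # estimated share of that budget for the disk drive
P7K500_IDLE = {250: 3.6, 320: 4.8, 400: 4.8, 500: 4.8}  # capacity (GB) -> claimed idle watts

def headroom(capacity_gb):
    """Watts freed up versus the budgeted drive allotment, for one capacity point."""
    return round(DRIVE_ALLOTMENT - P7K500_IDLE[capacity_gb], 1)
```

So the 250 GB drive leaves roughly 4.7 watts of the budget unused, and the larger models about 3.5 watts, which is the “wiggle room” PC makers can spend elsewhere in the system.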
These savings won’t necessarily make a dent in anyone’s home electric bill, according to Lee Johnson, 3.5-inch product marketing manager for Hitachi. “But with the additional watts left over, PC makers can use that added wiggle room to design PCs with more RAM, more features on the motherboard, or a higher processor clock speed,” she said.
Hitachi plans to add similar power-savings technology to its enterprise-class drives, but IDC’s John Rydning says that may not necessarily be practical–nor lead to significant cost savings in enterprise disk systems.
“At the enterprise level there’s not a lot of impact on the overall system by reducing idle drive power draws,” he said, noting that turning drives completely off through MAID is the way the enterprise is headed. “But if you’re a large enterprise organization with hundreds or thousands of PC workstations, this might make a difference.”