The glitz and glamour of new product releases tend to overshadow the rather mundane task of performing firmware upgrades on storage systems. However, administrators who take the time to keep their storage systems current with the latest patches may find they can avoid some FC SAN “gotchas” and uncover some hidden gems that vendors are packaging in their latest firmware releases.
Prompting my thoughts on this topic was a recent conversation with a storage architect. He had recently inherited a FC SAN where the firmware on the storage systems was two major releases back. The older code was becoming a problem: other devices on the SAN (switches, virtual tape libraries, and servers) had newer firmware with new features, but to take full advantage of those features, the storage systems also needed newer code.
I discussed this topic with EMC, partly because the storage systems in question were EMC Clariions, but also because I know from personal experience that EMC releases firmware updates on a fairly regular basis.
In the case of its Clariions, EMC comes out with a major release every 9 to 12 months that includes major new functions. For instance, its December 2006 code release for the Clariion included a new proactive hot spare feature for improved high availability and a Quality of Service feature as a licensable add-on. Its August 2007 Clariion major release added new security features as well as iSCSI enhancements like native replication.
Another interesting feature included in the update is the Software Assistant. This tool scans the Clariion prior to starting a firmware upgrade and recommends which code an administrator should load on the system. The Software Assistant also performs a high-availability check before the upgrade actually starts, to confirm that the firmware upgrade can be completed without unexpectedly taking the system offline.
EMC recommends that customers install major firmware releases for its Clariions shortly after they come out (within three to four months).
However, there is a more pressing reason to keep firmware current: upgrades must be applied sequentially. If a Clariion system is two generations back, customers may need to upgrade to the intermediate release before moving to the newest one. Though this is generally not a big deal, it does add to the time needed to perform the upgrade and makes it more difficult to back out should something go awry.
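The sequential-upgrade constraint is simple to model. Here is a minimal Python sketch of the idea; the release names and ordering are purely illustrative, not EMC's actual version scheme:

```python
# Hypothetical ordered list of major firmware releases (illustrative only).
RELEASES = ["R16", "R19", "R22", "R24", "R26"]

def upgrade_path(current, target):
    """Return the releases to apply in order, one major release at a time."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    if j <= i:
        raise ValueError("target must be newer than the current release")
    # Every intermediate major release must be applied before the newest one.
    return RELEASES[i + 1 : j + 1]

# A system two generations back must step through the intermediate release:
print(upgrade_path("R22", "R26"))  # -> ['R24', 'R26']
```

The longer the path, the longer the maintenance window — and the more steps there are to unwind if the upgrade has to be backed out.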
Over the last few weeks, storage insiders have been abuzz with speculation that a merger between HP and Symantec is imminent. Whether such talks are actually occurring, I cannot say definitively, but if a deal does happen, the corporate world might as well kiss goodbye any hopes it had of creating and managing a heterogeneous storage environment.
Obviously, I’m exaggerating a bit. Kissing heterogeneity goodbye won’t happen the day such a deal is signed (if it occurs), and it probably won’t ever completely happen. HP and Symantec will likely both pledge that heterogeneous support will remain part of their product roadmaps. And that is likely true. However, one can almost bet that when it comes time to prioritize which storage products are tested first against future releases of Symantec’s Veritas storage software, HP’s storage products will find their way to the head of the line.
More disconcerting is what Symantec’s acquisition by HP (or whomever it is acquired by or merges with) would mean for the future of heterogeneous storage environments in general. At one time, Symantec was on the vanguard of supporting enterprise heterogeneous storage environments. Yet now no one is really shocked, or even appears overly concerned, when Symantec is mentioned as a candidate for acquisition or merger by a company traditionally considered a storage hardware vendor.
This mindset is testimony to changing user concerns and priorities. It used to be that storage hardware was the primary cost in a user’s data center. Not anymore. Now, it is the management of the storage hardware — even if a user buys all of the hardware from the same storage vendor.
Managing storage hardware from multiple vendors has become a mind-boggling exercise. While at one time it may have been worthwhile to spend the extra time and money to verify that an HP-UX server worked with an IBM storage system, it is questionable whether that is still the case. Instead, I sense an increased willingness on the part of users to pay a premium to buy all of their storage hardware and software from one vendor and avoid checking the multiple support matrices that heterogeneous environments require.
The looming acquisition or merger of Symantec, regardless of the buyer, signals the re-emergence of an old systems management philosophy. Companies no longer want a one-trick pony for their storage management needs, even if that one-trick pony manages heterogeneous storage environments. Instead, more companies appear to want a return to simpler times, buying all of their storage hardware and software from one vendor whose products work nicely together. Let’s just hope that if companies have to revert to this philosophy, it works better this time than it did in the past.
This was our first Storage Decisions conference in the hilly city built on a fault line, and that meant a fresh crop of Storage Decisions attendees and happenings.
Sun held a “trends and innovation” dinner for press and analysts on Monday night, concurrent with the show (though not affiliated with it). About two dozen Sun execs and their audience sat down to a gourmet repast at San Francisco’s trendy Absinthe restaurant. Sun’s representatives included Chief Technology Officer and Executive Vice President of Research and Development Greg Papadopoulos, CIO Bob Worrall, Executive Vice President of Systems John Fowler, and distinguished engineer Subodh Bapat.
As always, Sun was articulating grand visions of the future. “The storage marketplace is about to undergo its most rapid set of changes possibly ever; it will change the economic fortunes of a number of companies,” Fowler predicted (Sun is hoping this will hold true in a positive direction for its storage products). Cost per capacity, he said, will be “one-tenth of what you see today.”
Like fellow large players IBM and EMC, both of whom have recently acquired storage service-provider companies, and Symantec, which is preparing a software-as-a-service (SaaS) backup offering, Sun is keen on outsourcing as well. Eventually, according to Bapat, there will only be a few “really big computers” in the world, run by companies like Microsoft and Google in “mega data centers” like Google’s famed farm of PCs. Sun would also like to become a service provider itself, but its real focus is on selling equipment into those service-provider data centers. Sun was already part of a similar build-out in the telecom industry in past years, though it was also pointed out that companies like Google have already done their build-outs just fine without Sun.
Meanwhile, new “mega data centers” are beginning to spring up, including a new 500,000 square-foot, 50-megawatt behemoth being built for a national lab set to open next year, according to Bapat. “50 megawatts is bigger than a small city would consume,” Bapat said. “Utilities are going to become a real problem.”
Bapat also predicted that within the next year, a major data center failure will “cause major national effects, and bring forward the importance of data centers as national assets.”
Sun loves to look out 15 years, but ask about the next 15 months and it’s a trickier question. Sun’s recently announced partnership with Dell is part of its attempt to position itself better in the market; Sun will also be going after service provider customers such as SmugMug, according to Worrall, and developing server-farm products with its partners at research universities. How that’ll translate into specific products and sales remains largely unclear.
Sun is on to something when it comes to Fowler’s prediction about the pace of change over the next year, according to Taneja Group founder Arun Taneja. “We’re in such a vibrant market right now,” he said. “I have never seen so much change and innovation happening all at once, ever.”
Everybody’s favorite user-blogger Tory Skyers was Mr. Storage Decisions this year, presenting on the storage issues raised by new mobile devices and participating in a user panel on storage management. Skyers warned users not to overlook the trend toward iPhones and home servers. “An executive buys a home server, plugs in his laptop at home, and the home server asks, ‘wanna back it up?’ Then his kid comes home with the Trojan du jour and suddenly your company’s data is in the Eastern bloc somewhere.”
Data leakage flows both ways in the mobile world, he added, with mobile devices blurring the line between personal and corporate data repositories. “So mp3s and AVIs and maybe even that Trojan find their way to the laptop, which finds its way to your data center, which finds its way to your SAN and your network.” Tory gave some how-tos on controlling that flow of information on both sides of the equation, including “using social networks in your work environment to enforce policy”: specifically, a “Page of Shame” for violators of company storage policies pertaining to mp3s and the like, plus strategically placed rumors of “someone getting busted” for violating policies. He recommended tools like Desktop Authority and Powerfuse for content filtering and for monitoring executables and contraband files; open-source and free Microsoft tools for creating document templates for data classification; and Surfcontrol Mobile Filter for restricting access to websites and protocols on company machines even when users are off the network and VPN. Desktop Authority and Powerfuse can also restrict which mobile devices can be plugged into a corporate machine: a USB mouse will get through, but not a thumb drive or iPod.
“This is a better alternative to sealing your USB ports with epoxy,” something Tory said he’d been asked to do before (by an exec who then realized he had no way to plug a mouse into a $2,500 machine).
In the course of his presentation, Tory also referenced the following tidbit from CNN: customs and border guards can confiscate anyone’s laptop without any grounds for suspicion and copy all the information held within it. Terrifying.
On Wednesday users gathered for a peer discussion on virtualization that turned up some interesting things, including–be still our hearts–an actual, living, breathing, Invista user (we wanted to take his picture). Very few of those present have actually deployed storage virtualization and those considering storage virtualization tools were also in the minority among this group. “I’m wondering what the benefits are that other people have seen to virtualization, what the return is,” said one user.
The majority of users saying they’d begun virtualizing are using HDS. Almost all users with storage virtualization in place said they used it to front other arrays from the same vendor, with the exception of migrating data from decommissioned storage. “You just don’t want to get into finger-pointing with the different vendors,” according to one attendee.
Three months after filing to become a public company, NAS vendor BlueArc has pushed back its scheduled IPO until 2008, according to industry sources.
Citing SEC regulations, BlueArc declined to comment on its IPO schedule. But several industry and financial analysts familiar with the company say its bankers have decided to hold off on going public. BlueArc filed for its IPO Sept. 7, and it normally takes a company about three months to begin trading shares as a public company. But BlueArc has yet to set its expected share range or go on the roadshow that precedes an IPO. There is usually a gap of at least two weeks between the share range and the IPO, which means BlueArc would run smack into the holiday season if it decided to go public by the end of 2007.
It’s not clear why BlueArc decided to wait, but it’s likely that the company and its bankers anticipate a lower price for shares than they originally expected if they go public now.
“I’m sure market conditions haven’t helped,” said a banker for a securities firm who is not involved with BlueArc’s IPO.
A financial analyst who follows storage said investors “are doing more due diligence on IPOs now,” and said it could hurt BlueArc that the stock prices of storage systems companies Isilon and Compellent have dropped drastically since their recent IPOs. The analyst said BlueArc might want to show another quarter of solid growth to help its case. There is also the possibility of an acquisition. IP SAN vendor EqualLogic had filed for an IPO before Dell scooped it up for $1.4 billion last month. Storage insiders agree that Hitachi Data Systems is the most likely suitor because it has an OEM deal to sell BlueArc NAS systems and an equity stake in the company.
Even without BlueArc, 2007 was an active year for storage IPOs. Pure storage companies Compellent, 3PAR and Data Domain all went public, along with EMC’s partial spinoff of server virtualization company VMware and InfiniBand suppliers Mellanox and Voltaire.
Just when I think that I have heard every reason for keeping data on tape, new arguments keep emerging. The latest is that tape is more energy efficient than disk.
My first real insight into this came a few weeks ago when I was speaking with Spectra Logic’s director of technical marketing, Molly Rector, who had just returned to Denver after meetings with Spectra Logic channel partners, resellers and users in the New York and Boston area. The feedback from her meetings was that some data centers in the Northeast were running low on power and could no longer obtain more. In these cases, the power shortage was forcing customers to choose tape because it is more energy efficient than disk, even though they wanted to buy disk for their backup environments.
While it may be true that tape consumes less power than disk, it is disconcerting that some companies find themselves in this predicament of needing to choose tape over disk because of something as seemingly preventable as an inadequate supply of power.
Keeping data on tape costs businesses in ways that are sometimes hard to measure. Legal discovery, the personnel needed to manage tape, and moving and storing tapes offsite all add to the costs of tape management and consume power in more subtle ways. To conclude that the choice between disk and tape needs to begin and end with a company’s rate of energy consumption seems a bit archaic to me.
Tape may consume less power than disk, but that does not necessarily make tape a better choice. Disk and tape are both options that companies need to have available to them, and either one, if managed properly and viewed from a total-cost-of-ownership perspective, can save companies money and cut energy consumption in the process.
Companies in this situation face some hard choices in the near term, as the decision is less about disk versus tape than about whether it is time to change how, and even where, they manage their data. In the Northeast, it appears some companies have already waited too long, because when the number of outlets left in the wall dictates what storage media they buy, the only choices left are unpleasant ones.
In doing some research recently on the problems associated with recovering data from old tapes, I found out that a similar set of problems exists when trying to recover data stored on old disks. This problem becomes especially pronounced if a company unplugs an old disk drive and puts it on the shelf, or keeps it in production too long.
The problem that companies are more likely to encounter when storing a disk drive on the shelf is not necessarily data degradation on the disk drive platter but mechanical failures of the parts within the disk drive itself. Greg Schulz, the lead analyst with Minneapolis-based StorageIO, finds that the lubricants of the mechanical parts within the disk drive can settle. This can cause the drive to malfunction when the company attempts to power it up again for the first time in a long time.
Jim Reinert, VP of disaster recovery for Kroll Ontrack, a worldwide provider of data recovery services, says that the largest problem Kroll encounters with trying to recover data from old disk drives is repairing and replacing defective mechanical parts inside the disk drive. Motors failing and electronic circuit boards going bad are just some of the components Kroll has had to repair before it can recover the data from the drive. This situation requires Kroll to find an exact match for the defective part, usually on the used market.
Of course, mechanical problems can also occur while the computer system is still in use. Reinert finds that some of the toughest data to recover is found on older, proprietary computer systems that are still in use when they break. Typically found in manufacturing and production environments, these are older computer systems that control a piece of equipment that everyone uses but no one manages. As a result, the data is not backed up, nor does anyone know who created the application or how it runs.
So, what’s the best way to protect data on old disk drives? The best and simplest way is to avoid keeping data on old disk drives and migrate it to newer ones. Kroll Ontrack classifies disk drives more than five years old as “old,” since by that time drive warranties have usually expired and parts for the drive are out of production.
Schulz is a little less dogmatic about the five-year cutoff. He finds that drives seven to eight years old are probably OK, depending on the conditions in which they were stored or how they are used in production. He suggests spinning them up on a regular basis (once every three to six months), though he agrees that as disk drives age, administrators should migrate the data to newer drives.
If a disk drive has already failed or you come across one of indeterminate age or condition and you don’t know what data is on it or its value to your business, your best bet is probably to send it to a data recovery specialist and keep your fingers crossed.
As Dell proved when it decided to drop $1.4 billion on EqualLogic earlier this month, large storage acquisitions have not gone away just because startups have found a lucrative IPO path and EMC is taking an M&A breather to integrate its new toys.
Hewlett-Packard, a company a lot of people thought was getting ready to exit the storage business a few years back, is now the most likely to add to its storage portfolio through acquisition. Over the past few years HP has picked up AppIQ, OuterBay, PolyServe and Opsware, and more is expected.
There has been persistent talk about possible HP deals of varying sizes: small (email archiving startup Mimosa Systems), medium (IPO-eyeing iSCSI vendor LeftHand Networks) and blockbuster (struggling security-storage giant Symantec). While some of these rumors have swirled for months and are growing stale, don’t be surprised to see HP pull the trigger on at least one deal between now and its Dec. 11 Analyst Day. And storage is high on HP’s list of priorities these days.
“Storage is a place that we have interest in growing our position,” HP CEO Mark Hurd said during the company’s Monday evening earnings conference call.
When asked specifically about storage acquisitions, Hurd refused to give details, “other than to say we continue to have a filter of something that makes strategic sense, it makes financial sense, and we can actually run and operate it.”
Hurd went on to talk about the importance of data storage in today’s corporate world, calling it a key attribute in the process of creating, moving, processing, visualizing, and printing content.
Hurd’s comments came after HP reported positive signs of storage growth after several disappointing quarters. Most of that growth came in the midrange and low-end. HP’s 7% increase in storage and 17% growth in its midrange EVA systems were on par with numbers recently reported by EMC and Network Appliance and well ahead of IBM’s storage performance last quarter. Hurd said he was also happy with the performance of low-end MSA systems.
The biggest storage disappointment for HP was that tape revenue declined, as did the high-end storage systems business that HP sells through an OEM deal with Hitachi Data Systems.
“There is still much room for improvement,” Hurd said of storage. “We still have a tape business that is not growing the way we would like and the high end is still behaving more like the mainframe market, as opposed to the mid-range market and the lower end of the storage market.”
Now we’ll see what HP does to, as Hurd put it, “grow its position.”
It was a treat for natives of the Boston area to go to the Museum of Science–most of us who hail from Massachusetts agreed the place is a staple of our childhoods. I haven’t been there in about 15 years, but was both surprised and delighted to find that most of the main exhibit areas I saw haven’t changed at all (leading one analyst to crack wise about EMC’s selection of venue for an “Innovation” day).
Here’s a photo of some of the usual suspects attentively listening to Joe Tucci give his entire PR staff heart attacks by revealing the code names of four new products set to be announced next year. (Said Tucci of the meeting during which objections to the pre-announcement were registered: “I asked, who do I see about getting that policy changed? And the arguments ceased.”)
Last but not least, here’s the newly appointed president of the compliance and archiving division at EMC (formerly chief development officer) Mark Lewis demonstrating a new Documentum interface called Media Space.
It shows a computer-generated sketch of a hypothetical video game console to demonstrate how the program can be used to collaborate on images.
Storage vendors look forward to the fourth quarter of every year because customers need to spend the rest of their annual budget, so they buy a lot of storage products. At this time of year you hear storage CEOs gleefully refer to “budget flush,” and the fourth quarter is almost always their top revenue period.
But this year that flush you hear may be their business going down the toilet. There has been a trend of CEOs from large tech companies warning that large enterprises and particularly financial services firms are spending less, and this could spill over to the rest of the business world.
Cisco, Symantec and IBM expressed those sentiments during their earnings reports over the past month or so. But while those companies sell storage, it’s not their main business. Storage is Network Appliance’s sole business, however, and CEO Dan Warmenhoven was no more optimistic during NetApp’s conference call Wednesday night.
Although NetApp had good results last quarter and gave a sunny forecast for this quarter, Warmenhoven pointed to a few ominous signs: business in North America was slow outside of the federal government. Revenues from NetApp’s 22 largest customers fell 4% over the last year. And he is worried that the problems with U.S. financial services firms could spread to the U.K. and to other U.S. industries.
“I don’t see any pattern other than the financial services meltdown, and I would encourage all of you who are part of the financial services – especially broker-dealer organizations – to please keep that among yourself,” he told financial analysts on the call. “Once you start exporting that set of problems to the rest of the economy, everybody is going to go in the tank.”
Warmenhoven’s take on the economy was gloomier than EMC’s Joe Tucci’s comments last month when he said sales to financial services were “all right” but “nothing to write home about.”
But storage vendors’ gloom can be good news for customers looking to buy. NetApp execs spoke of “selective pricing” and pricing bundles on their conference call. They refused to get specific when analysts tried to pin them down, but it sounds like this could be a good time for a discount.
Riverbed is going on the offensive in its patent infringement battle against Quantum by filing its own claim against the backup vendor.
In response to Quantum’s patent infringement suit filed five weeks ago, Riverbed this week filed a counter-claim in Federal District Court in San Francisco charging that Quantum’s data deduplication products infringe a Riverbed patent. Quantum’s original lawsuit accused Riverbed of infringing a Quantum patent in its wide area file services devices.
A Riverbed press release quotes general counsel Brett Nissenberg saying Riverbed carefully evaluated Quantum’s DXi series of deduplication products after the original lawsuit, and found that Quantum infringed on Riverbed’s patents.
Basically, Riverbed is saying “Not only didn’t we infringe on your patent, but you infringed on ours.” Now Riverbed is seeking what Nissenberg calls “substantial compensation” from Quantum, along with an injunction preventing it from using the technology.
If this all sounds familiar, it is. Substitute Quantum for Network Appliance, Riverbed for Sun and deduplication for ZFS and you have a similar situation going on with the storage system vendors. The difference here is, Riverbed and Quantum aren’t exchanging verbal smacks via executive blogs.
Riverbed CEO Jerry Kennelly did address the issue during the company’s earnings call last month, calling the lawsuit “meritless.”
“We know our products,” Kennelly said. “We have read their patents and we do not infringe their patents and we tried to tell them that for eight months.”
Quantum had a similar response to Riverbed’s legal action. “We believe that Riverbed’s counter-claim is entirely without merit and will obviously defend ourselves vigorously,” a Quantum spokesman said in an e-mail to SearchStorage.