Storage Soup


August 6, 2007  9:03 AM

The Linux effect on storage

Carolyn E.M. Gibney Profile: cgibney

Linux is currently used in about 20% of medium- to large-sized data centers, and according to some reports, it will be in some 33% of data centers before the end of the year. By 2011, most data centers are expected to have at least half of their environment running some flavor of Linux. As the platform settles in, it is important to consider the ramifications for storage, data protection and disaster recovery.

When I look at how a supplier handles coverage of a platform, I compare it to the games checkers and chess. When a supplier has “checker coverage”, that means they have just enough support of the platform to be able to get a check mark. When I say they have “chess coverage”, that means they have deep coverage, including specific databases that are popular on the platform.

Looking at the foundation of data protection, backup software is a good place to start. Most of the major suppliers certainly have “checkers”-type coverage of the Linux environment. Most have the Red Hat and perhaps SUSE variants covered, but some still only support Linux as a client, meaning that Linux servers cannot have locally attached tape. As your Linux environment grows, this can be a real problem. A handful of backup software suppliers have also ported over their Oracle hot backup modules, and while Oracle on Linux is significant and growing, the MySQL install base seems to be growing faster. And while MySQL data sets were once nowhere near the size of Oracle data sets, they seem to be catching up there as well. A little farther behind is PostgreSQL, but it too has a significant and growing install base. So it is important that your backup application supports more than just Oracle and can do more than basic hot database backup; granularity to the tablespace level, for example, helps speed both backups and recoveries.
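
As a rough illustration of that kind of granularity (and not any particular vendor’s module), here is a minimal Python sketch that hot-dumps individual MySQL tables with mysqldump; the database, table and file names are hypothetical, and an Oracle-side equivalent would work at the tablespace level instead.

```python
#!/usr/bin/env python
# Minimal sketch: dump individual MySQL tables "hot", using
# mysqldump's --single-transaction for a consistent snapshot.
# Database, table and file names below are hypothetical.
import subprocess

def hot_dump_tables(database, tables, outfile):
    # --single-transaction yields a consistent view for InnoDB tables
    # without holding locks for the duration of the dump.
    cmd = ["mysqldump", "--single-transaction", database] + list(tables)
    with open(outfile, "wb") as out:
        subprocess.check_call(cmd, stdout=out)

if __name__ == "__main__":
    # Back up only the two largest tables rather than the whole instance.
    hot_dump_tables("orders_db", ["orders", "line_items"], "orders.sql")
```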

There are backup applications that support Linux completely, so there is no longer a need to sacrifice coverage. This may mean supporting two backup applications in the enterprise: one for Windows and one for Linux. But as I have said in past articles, while not ideal, that is not unacceptable, especially if it means you significantly improve your level of data protection on the second platform. You may find that the new product provides support as good as, or even better than, your original one.

When looking at core storage, the situation is equally interesting. For block-based or SAN storage, basic support, “checker coverage,” seems to be there across the board. Most SAN vendors support Fibre Channel attachment of Linux servers to their arrays, and support for iSCSI connections is growing. There is not much beyond this basic connectivity, though; support for boot from SAN, for example, is limited.

Interestingly, when it comes to SAN-based storage, the manufacturers have created modules for specific applications that allow their arrays to interact better with those applications. For example, a module for Exchange might quiesce the Exchange environment, take a clean snapshot and then mount that snapshot to a backup server for off-host backup. Despite the growth of the Linux install base, and especially the growth of MySQL and PostgreSQL in that environment, we have not seen many comparable tools to protect these increasingly critical applications. You can write scripts to accomplish the above, and in many cases today you have to. But it would be better to have this integrated into the storage solution, so you can avoid all the issues that surround homegrown scripts.
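
For a sense of what such a homegrown script involves, here is a minimal Python sketch assuming a MySQL database on an LVM volume; the volume and account names are invented, and this is exactly the sort of glue an integrated array module would replace.

```python
#!/usr/bin/env python
# Rough sketch of a homegrown quiesce-and-snapshot script: pause MySQL
# writes, snapshot the LVM volume under the data files, resume, then
# back up the snapshot off-host. Names here are hypothetical.
import subprocess
import time

# FLUSH TABLES WITH READ LOCK quiesces writes only while the session
# that took the lock stays open, so keep one mysql session alive.
session = subprocess.Popen(["mysql", "--user=backup"],
                           stdin=subprocess.PIPE, text=True)
session.stdin.write("FLUSH TABLES WITH READ LOCK;\n")
session.stdin.flush()
time.sleep(2)  # crude: give the lock time to take; a real script would verify

# With writes paused, snapshot the logical volume holding the data files.
subprocess.check_call(["lvcreate", "--snapshot", "--size", "1G",
                       "--name", "dbsnap", "/dev/vg0/mysql"])

# Ending the session releases the lock and resumes normal writes.
session.stdin.write("UNLOCK TABLES;\n")
session.stdin.close()
session.wait()

# /dev/vg0/dbsnap can now be mounted read-only on a backup server for
# an off-host backup, then discarded with lvremove.
```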

With Linux and NAS-based storage, you have to be equally careful. Linux follows Unix file system semantics, so working with a Windows Storage Server-based NAS can often be problematic. In all fairness, a Linux-based NAS often has problems with Windows clients as well. There are two options here. You could focus on the Tier 1 NAS providers that have mostly resolved the Unix and Windows file system differences; this has cost challenges, but provides comfort and reliability. Another option is to use a virtualized network file management tool. With a network file management product you can have both a Windows NAS and a Linux NAS and have data directed to the appropriate NAS based on data type, allowing seamless support of both file systems. Of course, a network file management product delivers far more than this. For example, it can migrate data to a disk-based archive as it ages, or help with migration to a new NAS platform altogether.
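
As a toy illustration of that routing idea (the share names and the extension-based policy below are made up; real network file management products use far richer rules):

```python
# Toy sketch of policy-based file placement: extension-driven rules
# stand in for a real product's policies; share names are invented.
import os

POLICY = {
    ".docx": "//win-nas/office",          # Windows-centric data
    ".xlsx": "//win-nas/office",
    ".log":  "linux-nas:/exports/logs",   # Unix-centric data
    ".conf": "linux-nas:/exports/etc",
}

def route(filename, default="linux-nas:/exports/general"):
    ext = os.path.splitext(filename)[1].lower()
    return POLICY.get(ext, default)

print(route("budget.xlsx"))  # //win-nas/office
print(route("syslog.log"))   # linux-nas:/exports/logs
```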

Disaster recovery is another point of consideration. If you are replicating at the SAN level, then the SAN storage controller itself can cover most of this. But if all of your Linux data is not on a SAN, then you may have issues replicating data for disaster recovery. Among the available replication applications, there are some very Linux-focused products but not many that can cover the whole enterprise. Replication is an area where you don’t want too many different tools to monitor and manage. Focus on finding a solid multi-platform tool that can replicate Linux, Unix and Windows data.

Linux is going to be increasingly important in enterprises of all sizes, and it seems that the traditional market leaders in storage are going to ignore the platform or give it just “checker”-type coverage. The new players on the market are taking advantage of this and moving quickly to fill the void. It is interesting to note that most of the manufacturers with a strong Linux solution also have equally strong Windows and Unix solutions. So, by providing only the most basic support for Linux, the market leaders may end up ceding the entire enterprise.

For more information please email me at georgeacrump@mac.com or visit the Storage Switzerland Web site at: http://web.mac.com/georgeacrump.

August 3, 2007  5:56 PM

Today CA delivered

Maggie Wright Profile: mwright16

Since my blog entry about CA posted yesterday, CA representatives and I have had a number of conversations about what I wrote and what the company has delivered in product functionality. In doing so, both of us have come to the realization that there were some misperceptions and missteps on both of our parts as to what I was asking about, and what products they actually delivered.

In terms of the context for the interview and the update I expected from CA, I was looking for what CA was doing to pull different data protection components together to manage them under one umbrella, whether its own components or those of competitors. Maybe I was unclear in articulating those expectations, or maybe they did not understand them; probably some of both.

I don’t for one second believe that integration is a trivial task. In fact, this may be one of the greatest challenges backup software vendors face this decade, and possibly the next, but that is also one of the reasons I am covering it. Archiving, virtual tape libraries (VTLs), compliance, continuous data protection (CDP), synchronous and asynchronous data replication and retention management are now all part of the data protection mix. Frankly, I’d be concerned if CA claimed it had fully integrated all of these components because analysts probably would have had a field day verifying, and likely debunking, that claim.

On the other hand, in conversations I have had with CA’s competitors, on and off the record, I sense that CA is starting to lag behind. There is nothing tangible I can point to, just a sense of the depth and quality of conversations I have had.

That is not to say CA is doing nothing, as yesterday’s blog post could have incorrectly led readers to conclude. In data protection, CA has focused on adding new features and integration to the XOsoft CDP and Message Manager email archiving products. To CA’s credit, they did bring up a good point: companies still manage data protection and records management separately internally. So CA first sought to bring out new features and functions in those products based on customer demand before tackling the overarching integration problem.

For example, since I last spoke to CA in February, a second integration service pack was released in March containing two features that I believe administrators will find particularly useful. Through the ARCserve interface, administrators can browse replication jobs set up in WANSync and see the sources selected for replication and the target replication servers. Then, when they need to restore data backed up from the XOsoft replica, the restore view in ARCserve shows the production servers rather than the XOsoft WANSync replica server.

The ultimate question remains, “Is CA doing enough and doing it fast enough?” Someone older and wiser than me once told me that it takes about 8 years for changes in storage practice and technology to work their way into the mainstream. Whether that holds true in the rapidly changing space of data protection remains to be seen.


August 2, 2007  6:29 PM

CA has yet to deliver

Maggie Wright Profile: mwright16

This last week, I had a chance to catch up with CA on what integration has occurred in their Recovery Management product line since I last spoke to them in February. Given what they told me in that first interview and the little progress they seemed to have made, I spoke to them again to make sure I hadn’t missed something.

“Scrambling” is the word that Frank Jablonski, CA’s product marketing director, used to describe CA’s efforts to pull together and offer customers some level of integration between its XOsoft and ARCserve product lines. To that end, CA has released two service packs to start integrating these products.

The first service pack enabled ARCserve to use a script to create backups from CA’s CDP product, XOsoft WANSync. The script does the following (a rough sketch follows the list):

  • Periodically stops the replication on XOsoft WANSync.
  • Takes a point-in-time copy of the data on the WANSync server.
  • Resumes the replication.
  • Backs up that point-in-time copy.
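
In rough Python pseudocode, the orchestration looks something like this; every command string is a hypothetical placeholder, not WANSync’s actual interface:

```python
#!/usr/bin/env python
# Sketch of the four steps above. Every command string is a placeholder
# (this is not WANSync's real CLI); the point is only the orchestration
# pattern that the service pack automates.
import subprocess

def run(cmd):
    subprocess.check_call(cmd, shell=True)

def protect_replica():
    run("wansync-ctl suspend")                    # 1. stop replication (placeholder)
    try:
        run("lvcreate --snapshot --size 2G "
            "--name ptc /dev/vg0/replica")        # 2. point-in-time copy
    finally:
        run("wansync-ctl resume")                 # 3. resume replication (placeholder)
    run("backup-client --source /dev/vg0/ptc")    # 4. back up the copy (placeholder)

if __name__ == "__main__":
    protect_replica()
```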

ARCserve then centrally manages that point-in-time backup, which companies can use for longer-term retention. The second service pack provided the same functionality but added a GUI.

However, I know many good system administrators who could write that script in their sleep. That leaves the integration between XOsoft WANSync and ARCserve at little more than a rudimentary level. Though it demonstrates progress, CA needs to accelerate its efforts in light of the announcements that CommVault and Symantec have made in the last couple of months.

To catch up, CA is planning major version upgrades of XOsoft in January 2008 and ARCserve in the spring of 2008. Jablonski promised users will see both product upgrades and more integration across CA’s different data protection products at that time. However, CA will likely not complete its integration efforts until about 2010 or 2011.

CA definitely has the potential and the software to offer users a robust data protection and management package. To CA’s advantage, backup software is generally not a product that users are apt to rip and replace. However, CA currently trails some other backup software vendors in integration and in delivering new features. But that will really only matter if I am still writing blog entries like this one about its products a year from now.


July 31, 2007  2:39 PM

Service level agreement tutorial

Karen Guglielmo Profile: Karen Guglielmo

Over the past month, I’ve been working on putting together podcast tips with some of our experts. Pierre Dorion, certified business continuity professional for Mainland Information Systems Inc., recently contributed a podcast called “Outsourcing backup: Get the right service level agreement”.

In this tip, Pierre discusses the questions that can help you ensure that a service level agreement (SLA) meets your requirements when outsourcing backup, such as:

  • What are your data recovery needs?
  • How fast can your data be restored?
  • What are the contractual obligations of the SLA?
  • Does your service provider have a solid disaster recovery plan in place?

Pierre also offers practical advice on making sure these questions get addressed. Check it out below.

Elsewhere on the Web, check out www.sla-zone.co.uk. It’s got a bunch of useful SLA information, broken down into topics such as services, performance, problem management, customer requirements, termination, and so on. Also, www.itil-itsm-world.com has a series of documents that are used to help build a framework for service management, including information on service level agreements and IT outsourcing.


July 26, 2007  4:11 PM

Storage analysts scratch their heads over HP / Bull rumors

Beth Pariseau Profile: Beth Pariseau

It is being widely reported that HP is in “advanced talks” to buy Bull SA, a French IT integrator. The reports, which originated with the French website capital.fr, contain detailed information about the potential price of the deal (approximately $1 billion US) and have been picked up by outlets including CNNMoney.com and Reuters. HP declined to comment on the rumors.

Bull, whose major shareholders include the French government, has a storage business unit, though it’s mostly a channel/storage integration play. The company also deals in other IT products, including servers and networking equipment, and has a customer footprint mostly in French government as well as a few overseas state and local government agencies, according to storage industry analysts.

Otherwise, analysts said, they’re mystified by the potential merger. “HP would have an interesting job on its hands getting a sleepy company to wake up,” said Arun Taneja, founder and analyst for the Taneja Group.

The company underwent a restructuring at the turn of the millennium, refocusing itself on channel sales and systems integration. However, despite attempts since then to penetrate US markets, 80 percent of its revenue comes from Europe, with a full 40 percent from France alone. Prior to its restructuring, Bull had gotten into the business when it purchased the computer operations of Honeywell, a mainframe and minicomputer maker ultimately left behind by the advent of the PC. (It’s a story similar to those of Digital Equipment and Wang, which went the way of the dinosaur when they couldn’t compete with IBM, Sun et al.)

“I can’t see what’s in [a potential acquisition] for HP other than the acquisition of a customer base for servers and storage,” said John Webster, principal IT advisor with Illuminata, Inc.


July 26, 2007  4:11 PM

Storage bloggers dig in on HDS and EMC product claims

Beth Pariseau Profile: Beth Pariseau

Following EMC Corp.’s storage announcements last week, which included the introduction of a new Symmetrix array, the industry has been buzzing with the claims and counterclaims of EMC and high-end disk array rival Hitachi Data Systems (HDS), as well as debates over the merits of each company’s products.

In the past week, two storage consultants in the UK have dug into the technical specs of Hitachi’s USP and the new Symmetrix DMX-4. Nigel Poulton over at Ruptured Monkey takes a close look at the pros and cons of Hitachi’s external virtualization vs. EMC’s internal tiered storage. Meanwhile, storage consultant Chris M. Evans discusses the “green” claims being made by both vendors in their recent array announcements.

Nigel concludes that there are pros and cons to both the HDS and EMC approaches, depending on a user’s particular environment, which leads him to ask a very pertinent question:

There is certainly a demand for both [approaches to tiered storage]…When compared to something like Thin Provisioning, which both vendors are working on, implementing the above features would be a comparative walk in the park.

So if it’s not that hard to implement, and by doing so you potentially hang on to your customers, why not pinch your nose and take the plunge?

Too much Kool-Aid might be the answer.

As for Evans, his conclusion is that “neither vendor can really claim their product to be ‘green’.” HDS’s USP, he concludes, still has a higher per-drive power cost than EMC’s Symmetrix. However, he doesn’t gloss over the weakness of using higher-capacity drives (to which every systems vendor has the same access) to make a “green” claim, saying, “customers choosing to put some SATA drives into an array…[will] see only modest incremental power savings.”

Evans is not the first to bring up the need for big vendors to step up their efforts around power consumption, particularly when mushrooming data retention and compliance archiving requirements mean that data management strategies for reducing storage growth are losing their effectiveness. Users at this year’s Storage Networking World conference in San Diego also called on storage vendors to invest in better silicon rather than pushing the issue back onto users and, in essence, blaming them for their storage management practices. Elsewhere, server and PC makers have already begun moving to more efficient power designs, and users like Evans are looking for a similar commitment from storage manufacturers to built-in reductions in power consumption, rather than lip service about the latest SATA drives.


July 20, 2007  12:30 PM

HP, Quantum in cahoots for LTO-5

Beth Pariseau Profile: Beth Pariseau

HP and Quantum put out a press release very quietly a week ago (it crossed the wire at 2:30 on a summer Friday afternoon; hard to fly much farther below the radar than that) announcing that they will be partnering more closely on development of LTO-5 tape products.

The exact terms of the agreement are confidential, though reps from both companies said this week that HP will be handling the “productization” of LTO-5 products, from the selection of components to decisions about product packages, whereas Quantum will be handling the design work for meeting LTO-5 specs.

This news follows on an announcement a few weeks ago that HP will be bundling Quantum’s StorNext file system with its EVA arrays for multimedia storage at production houses including Warner Bros.

As the two companies cozy up, and do so with such an emphasis on confidentiality, at least with this latest agreement, it raises the question: could an acquisition be next?

Like Sun when it acquired StorageTek, HP might be able to boost server sales by owning Quantum. HP has also been doing battle with IBM lately in storage, and proprietary tape is one thing IBM has that HP doesn’t. (Hence IBM’s bluster a few months ago about being No. 1 in pure storage hardware sales, according to IDC.) Also, Quantum’s products tend to appeal to the midmarket and small businesses, and lately HP’s storage strategy has been moving downmarket as well.

Right now, analysts say there’s probably nothing more to this latest tape deal between the two companies than meets the eye. If there are broader implications, according to Arun Taneja, founder and analyst with the Taneja Group, they’re for the tape market in general. “The reality is that despite tape people shouting, the tape market is maturing and declining,” he said. “For Quantum to develop two distinct tape products with both DLT and LTO is a fool’s paradise in that kind of environment.”

But if a company focused entirely on tape starts to offload production of one half of its tape business, you have to start to wonder. Especially when it’s offloaded to a vastly bigger company, one that has a spot open for said products in its portfolio and has been pushing to continue its momentum in the storage market of late. For now we haven’t seen any smoking gun pointing to an impending merger, but rest assured we’re keeping an eye on these two.


July 20, 2007  9:21 AM

Not perfect, but good enough

Maggie Wright Profile: mwright16

Storage system-based asynchronous replication isn’t perfect, but for many corporations it is good enough. Having just finished researching and writing a feature on storage system-based asynchronous replication for an upcoming issue of Storage magazine, I can report that user adoption of the technology is no longer a rarity, at least if one believes the storage system vendors.

While I did not speak to every storage system vendor for this report (there are dozens), the ones I did speak to consistently said that anywhere from 30% to 50% of their users employ this technology. To a certain degree, one might expect these numbers from a storage system vendor like EqualLogic, which includes asynchronous replication as part of its storage system’s base software package. But when Hitachi Data Systems (HDS) went on the record and said it is seeing similar adoption rates among its user base, it caught my attention.

Users of HDS storage systems generally need to license asynchronous replication software separately, so the adoption rate gives some indication of the value users now ascribe to making copies of their data on a secondary storage system. Though it would take some time, and a lot of cooperation on HDS’s part, to find out what percentage of licensed users actually use the feature and on what scale, it stands to reason that if users paid for it, a high percentage of them are probably using it.

Companies are figuring out that they can repurpose money budgeted for tape and offsite storage and instead buy cheaper secondary storage systems with asynchronous replication software. They can then take point-in-time snapshots of their production data, replicate them offsite, and use the copies for daily backups, faster restores and, in a worst-case scenario, application recovery.

Is this architecture perfect? No. But companies are running out of time waiting for the perfect scenario, and tape is certainly not it. At least in this scenario, recoveries happen much faster than restores from tapes sitting in someone’s warehouse. Companies are looking for a more cost-effective means to improve backup and recovery without breaking the bank, and for a growing number of them, storage system-based asynchronous replication is a reasonable compromise between perfection and what is affordable.


July 16, 2007  10:04 AM

Hitachi admits storage virtualization might not be for everyone

Beth Pariseau Profile: Beth Pariseau

Generally the drumbeat of messaging from HDS is as constant as a metronome: array-based virtualization is the answer. Storage virtualization will heal your environment, bring about peace in the Middle East, and solve global warming.

So when an HDS exec writes a piece on his blog about who might not benefit from storage virtualization, it’s definitely worth a read.

David Merrill, a storage consultant and solution architect with HDS since 1996, recently got back from what sounds like a rather thorny customer engagement in Korea. The customer, who is not named, wanted to extend its XP array’s virtualization to legacy systems (the XP line being HP-rebranded HDS hardware). During a TCO analysis, Merrill writes, “Total purchase cost for the virtualization solutions was, as you can guess, less than a monolithic, but the 4-year TCO costs were higher” due to power and cooling costs, and maintenance costs with legacy systems (“when virtualizing older systems, the old hardware maintenance comes along too,” notes Merrill).
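
A back-of-the-envelope version of that comparison, in Python, with every dollar figure invented purely for illustration; only the structure (lower purchase price, higher recurring costs) follows Merrill’s point:

```python
# Illustrative 4-year TCO arithmetic. All dollar figures are made up;
# the shape of the result mirrors Merrill's observation that a cheaper
# purchase can still carry a higher total cost of ownership.
YEARS = 4

def tco(purchase, power_cooling_per_year, maintenance_per_year):
    return purchase + YEARS * (power_cooling_per_year + maintenance_per_year)

monolithic = tco(900_000, 40_000, 60_000)
# Virtualizing keeps old gear around, so its power and maintenance
# costs come along too.
virtualized_legacy = tco(600_000, 90_000, 130_000)

print("monolithic 4-year TCO:         %d" % monolithic)          # 1300000
print("virtualized legacy 4-year TCO: %d" % virtualized_legacy)  # 1480000
```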

The user still went with virtualization anyway: with 20% storage growth expected over the next three years, there is a “tipping point” past which virtualization becomes the more cost-effective choice. “Moral of the story, be sure to look at many factors when considering different architectures. Just because you can virtualize does not mean that every old system needs to be kept around indefinitely…Your mileage will vary,” Merrill concludes.

Wonder what Mr. T would think of that.


July 12, 2007  12:17 PM

Simpana jumps to the front

Maggie Wright Profile: mwright16

42 man-years of work and 18 months of development. That’s the amount of time and effort CommVault put into its Simpana 7.0 Software Suite, announced on June 10th, according to Dave West, CommVault’s VP of marketing and business development.

While it is encouraging to note that CommVault spent so much time on this release, it’s equally sobering to ponder that data protection upgrades now take this much time and effort to complete. But, based upon what enterprise customers have needed for the last 5 to 10 years, this is the first product that comes close to delivering on those requirements.

Consider this: Frank Albi, president of Business Information Solutions, a records management provider in Cincinnati, Ohio, manages paper, tape and optical media. In this role, he is often asked to help his clients develop a records disposal policy. For his clients’ paper records, he can deliver one with a high degree of certainty. Not so with tape and optical media; he does not even know where to begin, because his clients can’t easily identify which files or records reside on which media. How can he develop an appropriate disposal schedule for media whose contents are unknown? So customers end up keeping it all, resulting in higher data storage costs and unnecessary exposure to future legal discovery costs.

What is compelling about CommVault’s Simpana is that it opens the door to address this dilemma that Albi and many others face.

Simpana combines backup and archive data into one common pool and, using its newly licensed FAST search engine, lets users search, access and retrieve archived and backed-up data stored in this new pool. Since both functions share a common policy engine, Simpana can set retention and expiration schedules for any file in the pool. Simpana’s new Single Instance Store (SIS) feature only sweetens the deal: it eliminates redundant file copies, which reduces the size of data stores and expedites backups.
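
To illustrate just the single-instancing idea (Simpana’s real implementation is of course far more sophisticated), a short Python sketch:

```python
# Toy single-instance store: identical file contents are written once,
# keyed by a content hash; everything else keeps only a reference.
import hashlib
import os
import shutil

STORE = "sis_store"

def add_file(path):
    os.makedirs(STORE, exist_ok=True)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    dest = os.path.join(STORE, digest)
    if not os.path.exists(dest):     # first time this content is seen
        shutil.copyfile(path, dest)
    return digest                    # catalog entry pointing into the store

# Backing up ten identical copies of a file consumes the space of one.
```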

Granted, to gain Simpana’s benefits administrators need to upgrade or install backup agents on servers, something I always looked forward to as an administrator. Not. But, as CommVault’s West points out, users can deploy the agents with push technologies. That may take some of the sting out of the deployment. Plus, the value of shortened backups and centralized enterprise searches across archives and backups should appeal to most organizations and offset whatever concerns they have.

CommVault’s Simpana still lacks the breadth and scope of features that Symantec’s NetBackup, EMC’s NetWorker and IBM’s Tivoli Storage Manager offer. But with disk a growing part of the backup equation and e-discovery a shadow over most companies’ futures, the features that traditional data protection products offer may not carry the same weight they once did.

Bottom line, for companies willing and able to standardize on a single data protection product, CommVault has jumped to the head of the pack and is the one by which data protection products should now be measured. It can reduce the size of data stores, expedite backup and recoveries and search across multiple data stores. Plus, CommVault offers continuous data protection, email archiving and replication products that administrators can manage through the same policy engine — making Simpana without equal in the industry. CommVault’s Simpana 7.0 Software Suite sets the mark high for data protection and is a template that other data protection products will be hard-pressed to match.

