Storage Soup

A SearchStorage.com blog.


February 15, 2011  4:38 PM

Anatomy of poor IT buying decisions



Posted by: Randy Kerns
IT decision-making

 

I had an interesting conversation about IT decision-making while attending an education foundation meeting. The meeting was a chance to speak with CIOs and other senior IT people in a non-business setting, and these discussions can be far-ranging and unguarded. In this non-sales, non-consultative environment, the conversation came around to the challenges each executive faced and the risks they carried in their positions. 

 

The challenges included personnel issues as well as the products and solutions they had deployed in their IT operations. While it was interesting to hear what systems they had recently purchased (my focus was on storage, but they talked about complete solutions encompassing hardware and software), the more interesting discussions were about why they had made these purchases.

 

One theme came through frequently – many felt they had been stampeded into making a decision and a purchase. They felt stampeded in the sense that they weren’t given time to do an adequate evaluation (in their opinion) and make an informed decision.

 

The reasons for not being given enough time included pressure from above: the company required immediate action, and business would be lost if the solution was not delivered immediately. Organizationally, a group (an internal customer, for example) would look elsewhere if it did not get an immediate solution.

 

“Look elsewhere” took the form of outsourcing, bringing in their own systems to meet the need, or threatening to withdraw budgetary support.

 

One person in the group said he was being forced to make “haphazard decisions” rather than informed ones. These haphazard decisions were made with no thought of how they would affect the overall strategy or IT operations. The word haphazard struck me as somewhat alarming. 

 

From my standpoint, this looked like a great opportunity for a case study of what the lack of a strategy or formal process can mean to IT. Asking what the results of these stampeded or haphazard decisions were did not yield a clear answer, however. It became obvious that the real effect may not be apparent for a few years. The initial responses were more emotional (and negative) than quantified. 

 

What I took away from this discussion was that decisions were still being made that were not informed and not really strategic. Whether they were bad decisions or not would take some time to find out. The people making the haphazard decisions were aware they were not proceeding correctly, and were embarrassed by it. But they felt they had no real choice. They had to make a decision and needed whatever information they could access quickly given the time they had. 

 

Fortunately, the majority of decisions about purchasing systems and solutions (especially storage systems) do not take this route. But the reality is some decisions are being made haphazardly, and that can present problems.

 

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

February 10, 2011  5:28 PM

Take a hard (and soft) look at storage project ROI benefits



Posted by: Randy Kerns
storage efficiency; storage ROI

While working on a project for the IT organization of a large company recently, I began questioning the information it included in its Return On Investment (ROI) calculations. The organization used ROI as a measurement to justify buying storage as part of a storage efficiency project, but I found problems with the measurement based on what went into the calculations.

Before explaining the difficulties I had with the information they used, let’s first look at what is typically included in ROI calculations. The definition I give for ROI is the assessment of the return on invested money from the savings or gains a project delivers. For a storage project, the ROI calculations include:

  • Cost of the solution – hardware and software plus implementation costs,
  • Savings in administration and operations, and
  • Gains in increased business, productivity, customer service, etc.

    ROI is usually expressed as a percentage of gain with a payback over a given period of time. I can still hear a salesman saying, “It pays for itself in just 11 months.” That’s the way ROI is used as an economic metric for decision making.

    But there are two types of gains usually included in ROI calculations: hard benefits and soft benefits. They are usually defined like this:

    Hard benefits include capital savings or deferred capital investments, operational savings, and productivity improvements.

    Soft benefits consist of opportunity costs, such as maximizing revenue potential by making enterprise information and applications more available, and cost avoidance through minimizing downtime.

    It is easy to add items to the soft benefits (taking credit for potential economic gains) or to make those numbers overly optimistic. I’ve seen this become a game of liar’s poker in a competitive situation. Because of that, I recommend not including soft benefits when making a decision. Many times the soft benefits are listed by a party with a lot to gain. It is best to leave those soft benefits as potential opportunities – preferably just listed but not quantified.
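    As a quick illustration of the arithmetic, here is a minimal sketch in Python of an ROI calculation built only on hard benefits, with the soft benefits listed but deliberately left unquantified. The numbers and categories are hypothetical, not from the project discussed above.

        # Illustrative ROI calculation -- all figures are hypothetical.
        solution_cost = 110_000          # hardware + software + implementation
        monthly_hard_savings = 10_000    # administration/operational savings
        soft_benefits = [                # listed, but not counted in the ROI
            "revenue protected by higher availability",
            "cost avoidance from less downtime",
        ]

        period_months = 36               # evaluation period for the ROI figure
        total_hard_gain = monthly_hard_savings * period_months

        roi_percent = (total_hard_gain - solution_cost) / solution_cost * 100
        payback_months = solution_cost / monthly_hard_savings

        print(f"ROI over {period_months} months: {roi_percent:.0f}%")  # 227%
        print(f"Payback: {payback_months:.0f} months")                 # 11 months
        print("Soft benefits (not quantified):", ", ".join(soft_benefits))

    Keeping the soft items out of the computed figure, as recommended above, means the decision rests on numbers that can be validated later.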

    Soft gains can still be used to help validate the accuracy of the project planning and for future corrective action. That’s why, as part of the closed-loop process we cover in our Evaluator Group classes, we recommend including soft gains in the actual returns at the completion of the ROI time period.

    The important thing to take from this is that using ROI for economic decision-making requires a more discerning review of the inputs, and the review process must include a validation of the underlying assumptions.

    (Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


    February 9, 2011  2:10 PM

    EMC prepares block-level primary deduplication



    Posted by: Dave Raffo
    emc; primary deduplication

    Although it may have seemed like it, EMC didn’t actually upgrade its entire product portfolio during its much-hyped product launch last month. During an investors’ event webcast Tuesday, EMC executives said they have more product upgrades coming at EMC World in May, including primary block-level data deduplication.

    EMC president Pat Gelsinger called primary block dedupe the missing piece in one key part of EMC’s strategy — aligning storage with VMware.

    “There is one feature NetApp has that we don’t have – primary block deduplication,” Gelsinger said. “We’ll have that in the second half.”

    Gelsinger said block primary dedupe is just one piece of the data reduction picture, but the rollout will “close that last piece of competitive gap that we don’t have.”

    EMC is the market leader in backup deduplication with its Data Domain and Avamar products and offers compression for primary data, but NetApp beat it to the punch with primary dedupe across its FAS storage arrays. Other vendors are also working on delivering dedupe for primary data. Dell acquired the technology when it bought Ocarina last year but hasn’t delivered a timeframe for integrating it with its storage systems. Xiotech and BlueArc have signed OEM agreements with Permabit to use Permabit’s Albireo primary dedupe software. IBM is going the compression route for primary data with its Storwize acquisition. EMC’s primary dedupe announcement may pressure its competitors to move forward with their products.

    Gelsinger said other products coming this year include VPlex with asynchronous replication over longer distances and a Symmetrix VMAX refresh, which he said will be “expanding down and across emerging markets.”

    Whatever products EMC delivers this year, you can expect the vendor to put a cloud twist on them. CEO Joe Tucci made the cloud the big theme of his presentation Tuesday.

    “We did not invent the cloud, but we recognized the opportunity of the cloud early,” he said. “We believe this will be the next big game-changing trend in the IT industry. It’s a wave of disruption, and I don’t believe there will be an IT segment that will be immune to this wave.”


    February 7, 2011  7:49 PM

    Violin gets funding from Toshiba, Juniper; HP too?



    Posted by: Dave Raffo
    Violin Memory; solid state storage; flash memory; Juniper Networks; Toshiba; Hewlett-Packard

    Flash memory array startup Violin Memory today said it received $35 million in funding, naming NAND flash maker Toshiba America and networking vendor Juniper Networks as investors. There were other investors not named, including one that sounds a lot like Hewlett-Packard.

    Violin’s press release said Toshiba and Juniper were investors, “along with other corporate partners, crossover investment funds, high net worth industry leaders and private equity general partners.” In an interview with StorageSoup, Violin Memory CEO Don Basile said the other investors include a “large storage and systems company that doesn’t want to be named.”

    Basile wouldn’t confirm or deny whether that partner was HP, but said the investor was a company that Violin has worked with. HP last June posted blazing fast online transaction processing (OLTP) benchmarks using four Violin Memory Arrays with HP ProLiant Blade Servers, marking the first time HP used another vendor’s memory system in that type of benchmark. There have been whispers as well as published reports in recent weeks that HP and Violin are working on deepening their relationship.

    Basile said the new investors will lead to partnerships that will boost Violin’s distribution. He said two major trends are poised to drive Flash storage implementation. He credited Oracle CEO Larry Ellison with identifying the first trend last year when he said Oracle would use flash to speed up its databases. The second trend Basile sees is for flash to provide a foundation for building private clouds. That’s where Juniper comes in.

    “Juniper exists in the highest end of data centers, and paired with a memory array like Violin’s, it can enable private clouds to be built,” Basile said. “We have [flash] in processors, and now we’ll have higher performance networking and I/O tiers to match those processors. This is something that will start to be talked about this year, and in 2012 people will start to roll out cloud services based on these technologies.”

    Basile said the funding will allow Violin to double its headcount this year, mostly by expanding its sales and engineering teams. The company currently has just over 100 employees. He also promised new product rollouts in the coming months. Violin’s current products include a 3200 Flash Memory Array, a 3140 Capacity Flash Memory Array, and a vCache NFS caching device built on technology it acquired last year from Gear6.

    Violin and Toshiba announced a strategic partnership last April, disclosing that Violin is using Toshiba’s NAND flash chips.


    February 3, 2011  6:36 PM

    New WhipTail CEO anticipates rise of solid state, demise of spinning disk



    Posted by: Dave Raffo
    whiptail; solid state storage

    Solid state storage startup WhipTail Technologies is looking to move into its next phase of development after hiring a new CEO this week.

    Dan Crain, whose 27 years in the IT business include five years as CTO of Brocade, takes over at WhipTail with the belief that the spinning disk era in enterprise storage is coming to an end. Crain said it’s a matter of time until the right type of solid state storage takes over, and he believes WhipTail’s I/O acceleration appliances are on the right track.

    “Without a doubt, we’re getting to the end of spinning media and disk,” he said. “It’s been around for 40 years, and the physics are getting more challenging. Solid state as a medium for persistent storage is getting there.”

    Crain said he knows just having solid state in a storage system isn’t enough. All of the major storage vendors do that now, and smaller vendors such as Nimbus Data, Texas Memory Systems, Avere Systems, Violin Memory, and Alacritech have all-solid state disk (SSD) systems.

    WhipTail’s Data Center XLR8r and Virtual Desktop XLR8r 2U multi-level cell (MLC) appliances use NAND and DRAM, and focus on block storage, particularly applications with high I/O requirements. Crain said he sees all-SSD appliances as the way to go rather than hybrid systems that use solid state and spinning disk as separate tiers. He said the market will take care of one SSD hurdle – price – while superior technology can overcome reliability issues.

    “A lot of tiered systems exist because the cost of solid state storage media was very, very, very expensive,” Crain said. “The price has fallen substantially now. With Flash-based disk, NAND-memory based SSDs, the more you write to them, the more they wear out. We have patented technologies that help deal with that. Hybrid systems that move things in and out of SSDs cause write wear. We don’t think that’s the way to go.”

    Crain said the Data Center appliance is tuned around database functions such as transaction logs and indexes, while the Virtual Desktop model is tuned for virtual desktop performance loads. “We’re not going to be the production data store for databases, we will be the index and transactional log storage device,” he said. “We keep system files, not user data.”

    Since it started in 2008, WhipTail has been run by founders Ed Rebholz and James Candelaria. Crain replaces Rebholz as CEO, while Candelaria stays on as CTO. Crain said WhipTail is still in the early stage of product sales with “several dozen” customers, and he expects to roll out product enhancements soon.

    “We have a lot of growth to do over the next two or three years and you’ll see us produce a lot of interesting technology,” he said. “Some of it will be very radical.”


    February 2, 2011  6:34 PM

    Mozy customers balk at cloud backup price increase



    Posted by: Dave Raffo
    mozy; cloud backup; price increase

    When EMC’s Mozy told customers this week it was raising prices on its home cloud backup service, company execs said the hike for most customers would be $1 per month. That would come to $5.99 for 50 GB of online backup.

    Well, scores of angry MozyHome customers say 50 GB won’t come close to cutting it for them. The support forum on Mozy’s web site has more than 50 pages of posts from disgruntled customers who claim their costs would rise drastically under the new plan, calculating they would have to pay up to thousands of dollars a year to continue with Mozy. Many customers who have posted on the Mozy site said they have discontinued the service, with others threatening to do so.

    Mozy reps said the price increase is necessary because customers are storing more photos and videos online, and backing up multiple computers.

    Instead of charging $4.99 for unlimited backup, Mozy’s new entry-level pricing is $5.99 for 50 GB on one machine. Mozy suggested most of its customers would find that plan satisfactory. Customers who need more online storage must pay $9.99/month for 125 GB on up to three devices, plus $2/month for each additional device and $2/month for each additional 20 GB of storage.
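    To see how the new tiers add up, here is a rough sketch in Python of the pricing described above. It assumes the $2 charge applies to each additional 20 GB block started and to each device beyond three; those details are my reading of the plan, not Mozy’s published terms.

        import math

        # Rough sketch of the new MozyHome pricing described above.
        def mozyhome_monthly_cost(gigabytes, devices=1):
            if gigabytes <= 50 and devices == 1:
                return 5.99                       # entry plan: 50 GB, one machine
            cost = 9.99                           # base plan: 125 GB, up to 3 devices
            if gigabytes > 125:
                cost += 2 * math.ceil((gigabytes - 125) / 20)
            if devices > 3:
                cost += 2 * (devices - 3)
            return cost

        # A customer backing up 275 GB from one machine:
        print(f"${mozyhome_monthly_cost(275):.2f}")   # $25.99, versus $4.99 for unlimited before

    Under those assumptions, heavy users quickly land well above the old $4.99 unlimited price, which is exactly what the forum posts below are reacting to.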

    The new price goes into effect immediately for new customers. Existing customers can keep the unlimited pricing until the end of their current contracts.

    Mozy CMO Russ Stockdale said the company hasn’t changed pricing for its MozyPro (business) service, because its customers store mostly business files and not photos and videos.

    “On the consumer side, pressure’s been there because of large files,” he said. “We feel like this is something the industry has to come to terms with.”

    He said competitors and other cloud backup providers have size limits on files they will back up, or throttle bandwidth to keep prices down. “We’ve concluded those aren’t the right way to do it,” he said. “We provide the same quality of service across all file types and storage allocations.”

    Stockdale also said Mozy surveyed customers before making the price increase and got the impression they thought the hike was fair.

    Many customers obviously do not think so. Here’s a sampling of comments from the Mozy forum:

    “I almost laughed when I got the email about the new plans from Mozy. Anyone that knows enough about computers to back up their files remotely has way more than 50gb worth of stuff worth backing up.
    I understand that, in any field, unlimited plans eventually go away. But usually companies do so by replacing them with a capped plan that’s still virtually unlimited for most users. We’ve seen this with cell phone carriers and ISPs capping their unlimited plans at heights that only effect the top 0.01% of users.

    But here, going from unlimited storage to 50gb of storage in a time when you can’t even buy a 50gb hard drive anymore because it’s so small? That’s a ballsy move.

    Going from unlimited to 50gb would have been a fair move 10 years ago. Now, not so much.”

    “Someone else made a great point earlier in this thread that it is sad that my iPod can backup more than 50gb! If the folks at Mozy think 50gb of data is a reasonable “average user” threshold to set their baseline, they must still be living in 2003.”

    “I thought Mozy was great, but I have over 600GB backed up on them, that’s over a $1,200 a year they want for that with the new pricing plans. I was debating between cloud storage and building my own server … that decision is a lot easier now. I can build a server at a relative’s house and FTP backup and still save money, or I could just run it from my house with a UPS in my detached garage for safety if I was that paranoid. … For $200, I could get 2tb backup with a dock, that’s a lot faster than Mozy and no constant resource drain.”

    “This new pricing structure is ridiculous! I currently have 275 gigs backed up and this will jump my monthly fee from $4.5 to $30!! I can buy a small fireproof safe and a good external for that kind of money. I understand that things have changed since you started, but gradual changes or a better pricing structure might have worked better. Hitting loyal customers with a huge increase is just going to drive them away to other alternatives.

    Only 50GB?! Seriously – only casual computer users only have 50GB of files and media, and those type of users aren’t savvy enough to even think to backup their files.”

    “… With this new plan my fee would increase from $210 for 2 pc’s for 2 years to $1680. No way Mozy. My need for storage increase with 100-200 GB per year so if I renewed now the next renewal would run up to a staggering $2500! No way is that going to happen.

    So my plan is to buy two external 2 TB hd’s and use them for offsite storage at my workplace. I will backup to one and the other will be offsite and once a month I will switch them. And just to clarify my online backup is NOT my only backup it is just a convenient offsite backup.

    And for the people that are considering a competitor please remember that if you have more than 200 GB then your upload speed will be reduced to only 100 kbit/s. So for me they are not really an option. It took long enough to complete the initial backup here on mozy with a 2 mbit/s upload line.”


    February 2, 2011  3:58 PM

    CommVault CEO: Industry has shifted to snap-based backups



    Posted by: Dave Raffo
    CommVault; data backup; replication; snapshots; data deduplication

    CommVault CEO Bob Hammer pointed to his company’s strong sales results last quarter as validation of Simpana 9, released in October, and of a change in the way organizations approach backup today.

    CommVault Tuesday reported revenue of $84 million last quarter, up 18% from last year and 11.2% from the previous quarter. The results were a big jump from two quarters ago, when CommVault slumped to $66.3 million in revenue and missed its projections by a wide margin. CommVault also seems to have gained on market leader Symantec, which last week reported its backup and archiving revenue increased 5% over last year.

    While Simpana 9 drew a lot of attention for adding source deduplication to the target dedupe in the previous version, it also leans heavily on replication and snapshot technology to create recovery copies of data without moving the data. These technologies were cited by Gartner in its most recent Magic Quadrant for Enterprise Disk-Based Backup/Recovery that placed CommVault at the head of the leaders group.

    “The old backup methodology of managing the backup stream is no longer efficient,” Hammer said. “The backup copy today is your persistent long-term archive copy. The industry has shifted into managing data using snaps and replication as the primary way of managing and moving data. We can seamlessly create a long-term non-corrupt point-in-time copy. The backup business is now a more comprehensive data management business.”

    Hammer also said CommVault will not follow Symantec’s lead of selling its backup software on branded appliances. CommVault sells Simpana on hardware from partners such as Dell, and Hammer said he will continue with that strategy.

    “We don’t want to get into the hardware business, for a lot of reasons,” he said. “At least not on our own. We will provide our software to partners who embed it and sell it on hardware.”

    CommVault’s sales spike last quarter came despite a drop in revenue from its Dell OEM deal. Dell accounts for 20% of CommVault revenue, but that revenue decreased 13% year over year and 10% sequentially last quarter. Hammer said that was due in part to a drop in sales to the federal government, which makes up a large part of CommVault’s sales through Dell.

    As for cloud backup, Hammer said CommVault is seeing large deals through managed service providers (MSPs) while enterprises are taking a slow approach. “But whether enterprises are going to deploy to the cloud yet or not, cloud capability is one of the requirements they’re looking for,” he said.


    February 1, 2011  1:16 PM

    Data center transformation and optimization aren’t the same



    Posted by: Randy Kerns
    data center storage

    Two topics are top of mind with IT managers and vendors today: data center transformation and data center optimization. These topics come up all the time in my discussions with CIOs and IT managers about their initiatives, and with vendors about their strategies to deliver solutions that meet those demands.

    Unfortunately, some people confuse the meaning of the two initiatives. They are similar, but different. There is a misunderstanding about how they should be applied, and that leads to discussions that start down the wrong track and can take time to rewind and re-focus.

    Let me put some perspective on how the terms should be used and how they will evolve over time. Data center transformation is a big-picture vision about changing the premise of how a data center operates. Think of it as a clean-sheet-of-paper discussion about IT. The discussion is really about how compute and storage services should be delivered to the data center’s customers. The customers could be departments, individual users, or even other companies.

    Discussions about cloud computing have transitioned into data center transformation discussions for most IT professionals. These cloud discussions include public clouds, where compute and information storage are handled by a service provider; private clouds, where the data center delivers its services as cloud-like offerings; and hybrid clouds, where some services and information storage remain on-premise within the data center and some utilize a public cloud.

    For IT professionals, data center transformation requires putting together a new strategy for providing services for today and the future – usually focused on providing IT as a service (ITaaS). For more on data center transformation, see Evaluator Group articles here and here.

    Data center optimization is focused on making the data center more efficient. The optimization may refer to deploying and exploiting new technologies and methods. Data center optimization is approached as an overall goal but is broken down into specific areas. Storage efficiency is one of those areas for optimization that has many elements.

    Data center optimization should include a rigorous and believable measurement system for making decisions and demonstrating results. Individual areas generally use ROI methodology for the measurements. Overall, a broad TCO calculation is used to show the combined effect of multiple initiatives.
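    As a sketch of how those two levels of measurement can fit together, the snippet below rolls hypothetical per-project ROI figures up into a single TCO comparison. The projects, costs, and savings are invented for illustration; they are not from any Evaluator Group model.

        # Hypothetical roll-up of per-project ROI into an overall TCO view.
        projects = [
            # (name, investment, annual savings)
            ("storage efficiency",     200_000,  90_000),
            ("server virtualization",  350_000, 150_000),
            ("power and cooling",      120_000,  50_000),
        ]

        years = 3
        baseline_tco = 4_000_000    # projected 3-year cost with no changes

        for name, cost, savings in projects:
            roi = (savings * years - cost) / cost * 100
            print(f"{name}: ROI over {years} years = {roi:.0f}%")

        optimized_tco = (baseline_tco
                         + sum(cost for _, cost, _ in projects)
                         - sum(savings * years for _, _, savings in projects))
        print(f"TCO over {years} years: {baseline_tco:,} -> {optimized_tco:,}")

    Each project can be judged on its own ROI and decision cycle, while the TCO line shows the combined effect the optimization program has on the data center as a whole.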

    For data center transformation and data center optimization discussions, there are differences in time scale, the decision-making process, the amount of effort involved, and the costs. Transformation is a company-wide directional decision with long-term implications. Consequently, the decision time may be lengthy. Optimization, primarily broken down into individual projects as elements of a larger goal, will have shorter decision cycles and be put into effect (at least partially) much sooner. These are important differences to take into consideration.

    (Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


    January 27, 2011  1:31 AM

    Quantum upgrades dedupe software, while sales miss mark



    Posted by: Dave Raffo
    data deduplication; quantum

    Quantum today said it is doubling the speed of its data deduplication software, at the same time it admitted sales of large disk backup devices with dedupe last quarter were disappointing.

    Quantum’s DXi 2.0 software will have the same dedupe ratio as the original DXi version, but Quantum claims it has the fastest throughput of any dedupe software on the market. The vendor also eliminated the option to dedupe post-process and failed to add global deduplication capabilities.

    The vendor announced the 2.0 release the same day it reported earnings that fell well short of its revenue guidance, mainly because of disappointing enterprise deduplication system sales.

    Quantum said a DXi4500 SMB appliance running DXi 2.0 software will deduplicate at up to 1.4 TB/hour for NAS and 1.7 TB/hour for the Symantec OpenStorage (OST) protocol, and a midrange DXi6500 with version 2.0 will dedupe at 4.3 TB/hour for NAS and 4.6 TB/hour for OST.

    The DXi software upgrade follows enhancements by EMC Data Domain and Sepaton to their deduplication products over the last nine days.

    The first version of DXi let customers choose between inline and post-process deduplication, but DXi 2.0 only supports inline dedupe. Quantum SVP Janae Stow Lee said that with the speedier software plus more powerful processors, inline dedupe is fast enough to negate any advantage of post-process. DXi 2.0 still does not support global dedupe across systems. Software deduplication products support global dedupe, as do Quantum’s hardware rivals Sepaton, Data Domain and IBM, at least across two nodes.
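    For readers less familiar with the terminology, here is a generic sketch of inline deduplication. It illustrates the general technique only, not Quantum’s implementation: blocks are fingerprinted on the write path so duplicates never reach disk, whereas a post-process design writes everything first and dedupes later, and global dedupe shares one fingerprint index across multiple systems.

        import hashlib

        # Generic inline dedupe: fingerprint each block before it is written.
        block_index = {}      # fingerprint -> location of the stored block
        stored_blocks = []    # stand-in for the backend disk

        def write_block(data):
            fingerprint = hashlib.sha256(data).hexdigest()
            if fingerprint in block_index:       # duplicate: keep a reference only
                return block_index[fingerprint]
            stored_blocks.append(data)           # new block: write it out
            block_index[fingerprint] = len(stored_blocks) - 1
            return block_index[fingerprint]

        # Writing the same 4 KB block twice consumes space only once.
        write_block(b"A" * 4096)
        write_block(b"A" * 4096)
        print(len(stored_blocks))   # 1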

    “This is not a clustered system,” Lee said. “There are advantages, but also complexity and cost disadvantages of clustered systems. We have a different strategy for that over time.”

    Quantum will begin selling 2.0 software on its SMB DXi4500a and midrange DXi6500 systems by March, and it will be available on the higher-end DXi6700 and DXi8500 hardware over the summer. DXi 2.0 will have the same price as the 1.0 software and will be a free upgrade for existing customers.

    Quantum today reported revenue of $176 million for last quarter, which fell below its guidance of $185 million to $200 million. Quantum executives said the vendor did have record sales of its midrange DXi systems and StorNext file system software and gained share in tape, but had trouble closing large deals with its new DXi8500.

    “We continue to make progress on our growth strategy, but not as much as some people expected,” Quantum CEO Rick Belluzzo said of the revenue miss. “The biggest factor was the enterprise business was substantially down because of deal flow and a transition to the 8500. We had a lot more [DXi] deals, but a lot of small ones.”


    January 25, 2011  5:13 PM

    Cloud storage gateway startup Cirtas gets funding, new CEO



    Posted by: Dave Raffo
    cloud storage; Cirtas

    Three multi-billion dollar storage acquisitions over the past two years have made storage a hot target for venture capitalists, especially startups that deal with moving data to the cloud.

    Cirtas today unveiled a new CEO as well as $22.5 million in a second round of funding. Cirtas is among several cloud storage gateway vendors who launched over the past year or so, and they have been busy with funding. Nasuni ($15 million) and Panzura ($12 million) closed funding rounds last month and StorSimple ($13 million) received funding in September. But with all of the large storage vendors also looking to the cloud, it’s unlikely that the market can support so many startups.

    Cirtas CEO Gary Messiana said it will take more than the ability to move data off to the cloud for one or more of these startups to stand out. He said it is the intelligence in the Cirtas Bluejet Storage Controller that is unique. The Bluejet controller presents an iSCSI target to servers as if it were an array on a local SAN. It handles encryption, tiering, data reduction and snapshots as well as sending data off to the cloud. Bluejet is used for backup data, along with tier 2 and tier 3 primary data. Customers can keep data on the appliance or move it off to cloud service providers such as Amazon, which was a first-round investor in Cirtas.

    “We have cache on our box, we have disk storage on our box, and our algorithms and sophisticated intelligence determines whether each file should reside in cache, the disk array, or we should move it off to the cloud,” Messiana said. “All of our customers are using us to put data into the cloud. Not all of [the data] obviously, but the portion that makes sense.”
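    To give a feel for the kind of placement decision Messiana is describing, here is a purely hypothetical sketch of a gateway tiering policy. The thresholds and the structure are invented for illustration; they are not Cirtas’s actual algorithm.

        from datetime import datetime, timedelta

        # Hypothetical placement policy for a cloud storage gateway.
        def choose_tier(last_access, size_bytes, now=None):
            now = now or datetime.now()
            age = now - last_access
            if age < timedelta(hours=1) and size_bytes < 64 * 1024**2:
                return "cache"        # small, hot data stays in the local cache
            if age < timedelta(days=30):
                return "local disk"   # warm data stays on the appliance's disk
            return "cloud"            # cold data is reduced, encrypted and sent off-site

        # Data untouched for 90 days would be pushed out to the cloud provider.
        print(choose_tier(datetime.now() - timedelta(days=90), 10 * 1024**2))  # cloud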

    Messiana won’t say how many customers Cirtas has yet, but he said the new funding will be used to beef up engineering, sales, and marketing for the Bluejet product. He replaces founder Dan Decasper, who remains with Cirtas as CTO. Messiana came from Cirtas investor Bessemer Venture Partners, where he was an entrepreneur in residence. He has also been a CEO at Netli and Diligent Software Systems.

    He said he learned a lesson at content delivery company Netli that should help him forge a strategy at Cirtas. “I saw that no large content owner wants to go to a single CDN [content delivery network],” he said. “We believe it will be the same thing for the cloud. We believe large enterprises will want to use multiple providers. Customers don’t want to get locked into a single back end.”

    For that reason, he said, Cirtas will work closely with other cloud providers besides Amazon S3. Its other cloud service partners include Iron Mountain, EMC Atmos and AT&T Synaptic Storage as a Service.

