Storage Soup


March 27, 2012  5:54 PM

Atlantis unveils ILIO for Citrix XenApp

Sonia Lelii

Atlantis Computing today launched Atlantis ILIO for Citrix XenApp, which helps reduce I/O and latency problems often associated with application virtualization. The product runs on a VMware vSphere hypervisor and is aimed at customers planning to virtualize XenApp 6.5 with Windows Server 2008 R2.

The new product is built on the same codebase as Atlantis ILIO for VDI, but this version is targeted at customers deploying application virtualization. Atlantis ILIO helps eliminate I/O bottlenecks because it processes I/O locally within the hypervisor’s memory, and it performs inline deduplication to reduce the amount of data hitting the NAS or SAN.

Atlantis ILIO for XenApp is a virtual machine that is deployed on each XenApp server and creates an NFS datastore that acts as the storage for the XenApp VMs running on Windows Server 2008 R2.

“We correct the problem the way we do with VDI,” said Seth Knox, Atlantis’ director of marketing. “All duplicate storage traffic is generally eliminated before it’s sent to the storage.”

Torsten Volk, senior analyst for Enterprise Management Associates, said Atlantis ILIO for XenApp helps optimize performance because it sequentializes and dedupes the I/O traffic. He also said support for XenApp will broaden Atlantis’ market substantially.

“There is a much larger customer base for Citrix XenApp compared to the VDI market and only minimal changes to the Atlantis ILIO codebase were required to accommodate XenApp,” Volk said. “Not many are using VDI because the ROI is still unclear, but XenApp is a well-liked and vastly adopted platform that has provided tremendous customer value for over a decade.”

Knox said there are customers who ask for both products, but agreed there will be more demand for ILIO for XenApp.

“There is a much larger install base of people using XenApp,” Knox said. “Many of our customers use both VDI and XenApp, so they asked us to do a version for XenApp.”

March 27, 2012  3:11 PM

Startup Basho heads for the cloud

Dave Raffo

Basho today launched an object storage application for service providers and large organizations who want to build Amazon S3-type storage clouds.

According to the vendor, Riak CS lets customers store and retrieve content up to 5 GB per object, is compatible with the Amazon S3 API, has multi-tenancy features, and reports on per-tenant usage data and statistics on network I/O. Pricing for Riak CS starts at $10,000 per hardware node, which comes to about 40 cents per GB for a 24 TB node.
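The quoted per-GB figure is easy to sanity-check. A minimal sketch of the arithmetic (assuming 1 TB = 1,024 GB; the numbers are Basho's published figures, the calculation is illustrative):

```python
# Back-of-the-envelope check of Basho's quoted Riak CS pricing:
# $10,000 per hardware node, 24 TB of capacity per node.
NODE_PRICE_USD = 10_000
NODE_CAPACITY_TB = 24

price_per_gb = NODE_PRICE_USD / (NODE_CAPACITY_TB * 1024)  # 1 TB = 1024 GB
print(f"${price_per_gb:.2f} per GB")  # roughly $0.41 per GB
```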

Riak CS is Basho’s second software application. Its Riak NoSQL database is based on principles outlined in the 2007 Amazon Dynamo white paper. While Riak is an open source application, Riak CS is not. Basho added multi-tenancy, S3 API compatibility, large object support and per tenant usage, billing and metering to Riak CS to make it a cloud application.

“We look at ourselves as an arms dealer of Amazon principles [outlined in the 2007 Amazon Dynamo distributed white paper],” Basho CMO Bobby Patrick said. “Riak CS is for large service providers looking for scalability and tenancy, and also large companies that want S3 without AWS [Amazon Web Services]. This is S3-compatible, but for a private cloud.”

He said several large multinational companies are evaluating Riak CS as a method of keeping important data in-house behind a firewall.

Riak CS is built to run on commodity hardware. Patrick said it will compete mainly with OpenStack Swift object storage, but it will also face competition from EMC’s Atmos and software from smaller vendors such as Scality’s Ring and Gemini Mobile’s Cloudian.

“Any hosting company, any telecom company, any infrastructure-as-a-service company, is going to have to evolve from expensive shared storage to cloud storage for economic scale benefits,” Patrick said. “A new architecture is needed for that. They need to do it on cheap commodity hardware and in a way they can manage it.”


March 27, 2012  8:07 AM

DataDirect adds ‘mini’ array for big data

Dave Raffo

DataDirect Networks (DDN) launched two storage systems for people who want to start small in their approach to “big data.”

DDN is known for storage systems that deliver extreme performance and capacity but also carry large price tags. To try to broaden its market, the vendor this week introduced lower-priced arrays, including one that starts at $100,000 under introductory pricing that runs until the end of June.

“We found there are a lot of customers and prospective customers looking to start with DataDirect at a lower price and form factor while benefitting from scalability,” DDN marketing VP Jeff Denworth said.

The new systems are the DDN SFA10K-M and SFA10K-ME. The 10K-M scales to 720 TB with InfiniBand or Fibre Channel networking and with SAS, SATA or solid-state drives (SSDs). Customers can upgrade the 20U system to the larger SFA10K-X.

The SFA10K-ME is the same hardware as the 10K-M, but can be bundled with DDN’s GridScaler or ExaScaler parallel file systems. The promotional $100,000 price is for a SFA10K-M with eight InfiniBand ports, a 60-slot disk enclosure, and 16 GB of mirrored cache.

DDN says its new systems cost 40% less and have a 57% smaller form factor than its larger SFA storage arrays.

“The news of dramatically smaller footprints and reduced-cost SFA entry points is not what we’re used to hearing from a company that is accustomed to extending the scalability and performance envelopes of big data applications,” Taneja Group analyst Jeff Byrne wrote of DDN’s new systems in a blog on the Taneja web site.

Denworth said the new systems fill a gap in DDN’s product line between the S2A6620 midrange storage for media/entertainment and high performance computing and the SFA10K-X high-bandwidth petabyte capacity platforms.

“Customers can grow the system as requirements and budget dictate,” Denworth said.

SFA10K-M customers can upgrade to DDN 10K or SFA12K systems, but they would have to take the systems offline. There are no non-disruptive upgrades.


March 26, 2012  7:59 AM

Why storage has a short lifespan

Randy Kerns

How long does an organization keep a storage system? That depends on a few things. For disk systems, there are several driving factors:

• The length of the warranty period and the cost of a service contract after the warranty period.
• The depreciation period on the system.

These factors usually lead organizations to plan on four or five years before replacing their disk storage system.

For tape systems that use LTO technology, IT generally looks at how long new tape drives that can read their existing tapes will remain available for purchase. Each new generation of LTO tape drive can read tapes created on the two previous generations. The timetable for replacing tapes (meaning migrating the data on those tapes) to a new generation is therefore set by how quickly new LTO generations are released. It usually takes around seven years before a generation arrives that can no longer read a given set of tapes.
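The LTO compatibility rule described above can be sketched as a simple check (a minimal illustration of the two-generations-back rule, not part of any vendor tool):

```python
# Sketch of the LTO backward-compatibility rule: a drive of generation N
# can read media written by generations N, N-1, and N-2.
def drive_can_read(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 2 <= tape_gen <= drive_gen

# A tape written on LTO-4 is readable by LTO-4, LTO-5 and LTO-6 drives,
# but not by LTO-7 -- so the data must be migrated before drives that can
# read it are no longer sold.
assert drive_can_read(6, 4)
assert not drive_can_read(7, 4)
```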

When I speak with contemporaries in other technology disciplines, they find it hard to believe how short the lifespan of a storage system is. They usually say that, given the size of the investment, a storage system should be kept for at least 10 years.

They understand the shorter lifespan better when I explain the pace that storage technology changes and the benefits from more frequent updates. These include:

• Greater efficiency in power, space, and cooling with new, higher capacity devices
• Improved performance with system support for solid-state technology
• New warranty periods for new storage systems rather than relatively expensive maintenance contracts for storage systems past their warranty period
• Improved reliability for new systems.

The discussion then shifts to how difficult it is to move to a new storage system, mainly because of data migration. Some storage systems automatically migrate data from an older storage system, especially if the migration is between different generations of the same system. If the migration is not transparent and automatic, it costs more to move to another generation of disk storage.

It gets more complicated when switching to another vendor or a different architecture from the same vendor. The new system may require administrators to provision and manage the storage differently than the old system. Administrators must understand the differences, learn new tools or administrative interfaces, and set up new procedures to monitor and respond to issues. These add to the acquisition cost when calculating TCO (Total Cost of Ownership) and pose a potential risk before being effectively implemented.

IT teams would obviously like a longer lifespan for storage systems, but the pace of technology change skews the tradeoffs toward replacement at regular intervals. As technology progresses, there may come a point where longer-lived systems offer greater economic advantages than they do now.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 20, 2012  2:57 PM

Which storage cloud is fastest?

Dave Raffo

Do you ever wonder how long it would take to move a dozen terabytes from one cloud provider to another, or even between two accounts in the same cloud?

Probably not, if you’re sane. But maybe you do if you have data in the cloud and think you might want to switch one day for performance or pricing reasons. And you definitely do if you’re a cloud storage vendor that promises service levels that might require non-disruptive cloud-to-cloud migration.

Nasuni fits in that last category, so the vendor extensively tested what it considers the top three cloud providers, based on the stress testing it ran last year. The latest results are included in its Bulk Data Migration in the Cloud report issued today.

In case you were wondering, here’s how long Nasuni estimates it would take to migrate a 12 TB volume:

• Amazon S3 to another Amazon S3 bucket: Four hours
• Amazon S3 to Microsoft Windows Azure: 40 hours
• Amazon S3 to Rackspace: Just under one week
• Microsoft Windows Azure to Amazon S3: Four hours
• Rackspace to Amazon S3: Five hours
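Those estimates imply very different sustained transfer rates. As a rough illustration (assuming 12 TB = 12 × 10¹² bytes and treating "just under one week" as seven days; the figures are derived here, not Nasuni's own):

```python
# Rough implied sustained throughput for each migration path, using the
# published time estimates above. Figures are illustrative only.
VOLUME_BYTES = 12 * 10**12  # 12 TB

estimates_hours = {
    "S3 -> S3":        4,
    "S3 -> Azure":     40,
    "S3 -> Rackspace": 7 * 24,  # "just under one week"
    "Azure -> S3":     4,
    "Rackspace -> S3": 5,
}

for path, hours in estimates_hours.items():
    mb_per_sec = VOLUME_BYTES / (hours * 3600) / 10**6
    print(f"{path:16s} ~{mb_per_sec:,.0f} MB/s")
```

A four-hour move of 12 TB works out to roughly 830 MB/s of sustained throughput, which is why the write capability of the destination cloud dominates the result.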

Nasuni CEO Andres Rodriguez said transmission speeds vary depending on the time of day, but the biggest differentiator is the cloud providers’ write capabilities; S3 had by far the best transfer times.

Nasuni determines the best back-end cloud for its customers, and usually selects S3 with Azure as the second choice. Nasuni’s competitors sell storage appliances and let customers pick their cloud provider, but Rodriguez said Nasuni picks the cloud provider to meet its SLAs.

“Our enterprise customers using storage in their data centers let Nasuni be the one to move data,” he said. “All customers want from Nasuni is storage service. They don’t care which cloud it’s in unless they want data in a specific geographic location. But that’s a location issue, not a provider issue.”

That means Nasuni customers can’t decide to switch providers based on pricing changes, but Rodriguez said he doesn’t recommend that practice.

“This is not an operation you want to be doing dynamically daily so you can save a few cents here and there,” he said. “You do it to take advantage of better features and performance.”


March 19, 2012  7:57 AM

TCO vs. ROI: Remember transition costs

Randy Kerns

While talking to value added resellers (VARs) recently about selling storage systems, I noticed their presentations about vendor products featured return on investment (ROI) calculations.

These ROI calculations focused on cost of the solution, savings in maintenance, floor space, power, and cooling, performance gains that enabled business expansion or consolidation, and savings in day-to-day administration.

But limiting the economic view of buying new storage technology to ROI does not represent the true financial impact of the transition. Investment in technology also requires a time element to be considered. A specific technology has a lifespan that is dictated by other factors such as warranty periods (and associated service costs), technology replacement and the subsequent unavailability of the earlier technology.

For these reasons, information technology professionals generally focus on total cost of ownership (TCO) when evaluating storage. TCO includes the time element and the transition costs. Notable factors in TCO calculations include product cost divided by the number of years the product will be in service, data migration costs, and operational and administrative costs over the product’s lifespan.

TCO gives a more accurate view when evaluating a technology deployment. For example, one product may cost less than another, yet transition costs may actually make it the more expensive choice. Lifespan is a big factor: if it is relatively short, basing a decision on product cost alone becomes risky.
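A minimal sketch of the kind of TCO arithmetic described above (all inputs are hypothetical and chosen only to illustrate how transition costs and lifespan can reverse a sticker-price comparison):

```python
# Hypothetical TCO comparison: a cheaper system with costly, disruptive
# data migration vs. a pricier system with near-transparent migration.
# All numbers are illustrative, not drawn from any vendor.
def tco(product_cost, annual_ops_cost, migration_cost, lifespan_years):
    """Total cost of ownership over the system's lifespan, annualized."""
    total = product_cost + migration_cost + annual_ops_cost * lifespan_years
    return total / lifespan_years

cheap_system = tco(product_cost=200_000, annual_ops_cost=30_000,
                   migration_cost=80_000, lifespan_years=4)
pricey_system = tco(product_cost=250_000, annual_ops_cost=25_000,
                    migration_cost=10_000, lifespan_years=5)

# The system with the higher sticker price can still win on annualized TCO.
print(f"cheap:  ${cheap_system:,.0f}/yr")   # $100,000/yr
print(f"pricey: ${pricey_system:,.0f}/yr")  # $77,000/yr
```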

Vendors recently began trying to address transition cost in storage systems by adding a built-in capability to non-disruptively migrate data to the new technology system. This has become a differentiating characteristic of primary disk storage systems but its impact is limited in archiving and non-existent in tape systems.

Evaluating storage technology solutions must go beyond the simplistic cost of the solution and use of a purely economic measure such as ROI. More detailed evaluation must be done with different factors. Looking at the past technology change rate can be a good predictor for assigning the longevity expectation. TCO, with the correct elements included, can reflect the real costs between different storage technologies and assist in making a more informed decision.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 14, 2012  8:43 AM

Dell offers deals for EMC, NetApp customers

Dave Raffo

Dell is trying to bolster flagging storage sales with a trade-in program that offers cash credits and improved lease terms to EMC and NetApp customers.

The Dell Storage Swap program launched today promises price breaks for organizations willing to retire EMC VNX, Clariion and Celerra and NetApp FAS arrays to move to Dell Compellent and EqualLogic storage before July 31. Dell pledges “specialized migration services,” support and “other financial incentives” for customers who switch.

When a vendor offers such a swap program, you can bet sales are not strong. That is the case with Dell, which has lost market share since ending its OEM relationship with EMC after acquiring Compellent last year. The Compellent deal followed Dell’s 2008 purchase of EqualLogic. Now Dell is banking on customers switching for the incentives and becoming happy enough with the results to stay with Dell long-term.

Even if Dell’s formal swap program is new, the strategy isn’t.

Christopher Patti, director of technology for AccuWeather, said Dell gave him a good enough price to switch from EMC Clariion and Hewlett-Packard EVA storage to EqualLogic in 2008. That was mere months after Dell bought EqualLogic, while Dell was still an EMC OEM partner. Patti said upgrading his older Fibre Channel arrays from EMC and HP would have cost in the high six figures. He bought two EqualLogic iSCSI SANs and has since added four more EqualLogic arrays.

“The Clariions and EVAs were ridiculously expensive, especially with day-to-day maintenance, upgrades and extra costs for replication, snapshotting and other things here and there,” Patti said. “Dell gave us a good price point.”

He said he also likes that EqualLogic data protection and management features are part of the base price and not add-on licenses. “Dell’s software makes it easy to manage the array, see where bottlenecks are and know when you have to purchase additional capacity,” he said.

Server vendors Dell, HP and IBM are losing share in the storage market to pure-play storage vendors EMC, NetApp and Hitachi Data Systems (HDS). According to Gartner’s worldwide external disk storage revenue report released this week, Dell’s storage revenue dipped 0.2% from 2010 to 2011 while the market grew 9.8%. Dell’s market share slipped from 8.2% to 7.4% during the year and it stands sixth behind EMC, IBM, NetApp, HP and HDS.

The targets of Dell’s trade-in program, EMC and NetApp, ranked first and second in revenue growth for last year.


March 12, 2012  9:56 AM

Amazon, Google, Microsoft slash storage cloud prices

Dave Raffo

When Microsoft Windows Azure dropped pricing for its cloud service last Friday, it marked the third cloud price cut of the week. Google and Amazon also dropped prices for storing data on their clouds earlier in the week.

All of this price slashing shows these companies are serious about getting enterprise data into their clouds. But SearchCloudStorage.com assistant site editor Rachel Kossman reports that customers need to do more than just look at a provider’s published price list when cloud-shopping. They need to match their use cases to the way the providers price specific transactions, or they could be in for a surprise when the bill comes.

Check out her story for details on all the new prices and more tips on getting the best price from cloud storage providers.


March 9, 2012  8:33 AM

Western Digital, Hitachi GST make it official

Dave Raffo

Western Digital’s $4 billion-plus acquisition of Hitachi Global Storage Technologies (HGST) officially closed today – a year and two days after the hard drive vendors first declared their intention to merge.

Western Digital is paying $3.9 billion in cash and 25 million shares of its common stock currently valued at $900 million for HGST, the world’s second largest enterprise drive vendor.

The deal had to clear hurdles from regulatory groups around the world because it makes the combined company the largest hard drive vendor, with 47% of the market, surpassing Seagate’s 32% share.

HGST owns 9% of the enterprise market compared to Seagate’s 56%. Western Digital has only 1% of the enterprise market and 30% of the overall hard drive market without HGST.

Western Digital said it will operate HGST as a subsidiary, and it will maintain the HGST brand and separate product lines.

HGST’s enterprise products include solid-state drives (SSDs), and it scored a big win this week when it revealed that EMC is shipping Hitachi Ultrastar SSD400S single-level cell (SLC) 2.5-inch SAS SSDs in its VNX unified storage arrays.

Mitch Abbey, HGST’s senior enterprise product line manager, said he expects more SSD qualifications with EMC and other storage vendors. He also said HGST has cheaper multi-level cell (MLC) drives in the works and is reviewing the PCIe market for server-based flash to determine if it’s worth putting out that type of product.


March 7, 2012  11:04 AM

Risks from IT changes are real

Randy Kerns

There is great hesitancy on the part of IT toward data center optimization initiatives because of the risks involved. IT pros are aware of the problems that can occur, and almost everyone has painful stories to tell of things that have gone wrong. Even with the great benefits of data center optimization, the hard-earned experience IT pros gain prompts them to carefully consider the risks.

The risks are the negative outcomes and their consequences that can occur with any project. Unforeseen issues that arise are the most feared. These include factors beyond the influence of IT, such as delays in construction and power upgrades.

Another area of risk for IT comes from vendor solutions deployed for data center optimization projects. The vendor product may not work as advertised. For IT, there is a “this has happened before” sensitivity. For vendors, there may be unique usage environments or changes to the original plan that make the system less than optimal, forcing IT and vendors to scramble to address the new situation. Another risk that can have an impact is the loss of key staff at an inopportune time.

The positives of data center optimization initiatives include economic gains from greater efficiencies and capacity expansion. The efficiencies come from better use of server and storage resources, greater automation, and simplification that can lower administrative costs. Server utilization is addressed primarily with server virtualization and storage systems capable of supporting increased demand from the virtual machines running on a physical server. Storage efficiency is addressed with thin provisioning, data reduction, and tiering. Other efficiencies include better storage management and data protection.

The risks and the potential impacts from IT projects are real. We saw an example of that last weekend when a merger of major airlines (really a takeover but merger sounds nicer) led to a transition to a single reservation system and a single scheduling system. Even with more than a year for IT to prepare, big problems occurred causing delays and cancellations. Just to give a personal perspective on the problem:

• For my flight the morning after the switchover, I could not check in because there was a problem with my reservation. I was supposed to call a customer service number to have it straightened out.

• There weren’t enough agents to handle the number of IT-related problems, resulting in a long hold time.

• The time to fix the problem was longer because the agent said she was learning a new system and the system seemed slow.

• I learned of a possible future problem because my mileage accumulated over years of flying on the airline was not transferred to the new system. A note said that this would get fixed in the next three days.

The person on the phone said at least three times, in a scripted statement, that everything was going well. Hearing that while the problems continued was at first amusing, then irritating. It did not take long to get to the infuriating stage.

These IT problems were significant and appear to be continuing, although the number of flight problems decreased between Saturday and Sunday. The risks were great and problems — expected and unexpected — occurred. These results show that the hesitancy of IT to make changes because of risk is well justified.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

