Storage Soup


October 8, 2012  7:50 AM

Estimating required storage capacity proves tricky

Randy Kerns

Buying storage is an ongoing process that may be periodic in some environments and seemingly continuous in others. There are several primary reasons given for storage purchases:

  • New application deployments that require storing significant amounts of data.
  • Performance or features are required on storage systems to optimize environments such as server virtualization.
  • A technology transition is necessary to replace systems that have reached the end of their economic life.
  • Additional capacity is required to handle the demand to store more information.

A common decision point in purchasing storage for all of these reasons is how much capacity to buy. There may be different types or classes of storage that segment the purchase, but the question of how much remains.  Finding the answer to this is more complicated than it seems.  It starts with evaluating the requirements.

In working with many IT operations on capacity planning, I’ve seen quite a variety of approaches to coming up with the amount of capacity involved in storage purchases.  The different methods range from elaborate capacity planning models to taking what is asked for by application and business owners and multiplying by 10.

One reason a multiplier is used for deciding on the amount of storage to purchase is that capacity demands continue to increase faster than expected, and failure to meet storage demand immediately has negative consequences. Another reason is budget uncertainty: there is a feeling that it may be more difficult to get funding for the purchase in the future, which means IT buyers are not sure when they will get another chance to buy storage because of a potential “freeze.”

The information typically provided for the amount of storage required may prove inaccurate. An example comes from the deployment of a new application that stores information in a database. The systems analyst may determine that the database will ultimately need 20 TB. The database administrator will request 100 TB, allowing extra capacity for testing and a buffer in case the systems analyst has underestimated the need. The storage admin may double that request for primary capacity to 200 TB and then add another 200 TB for a backup-to-disk target. Now the purchase for a 20 TB primary need has expanded to 10 times that for primary storage and an equal amount for data protection.
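As a back-of-the-envelope illustration, the short Python sketch below uses the multipliers from this hypothetical example (they will differ in any real shop) to show how the compounding buffers turn a 20 TB estimate into a 400 TB purchase:

    # Hypothetical illustration of how compounding buffers inflate a storage request.
    # The multipliers mirror the example above; real values vary by organization.
    analyst_estimate_tb = 20        # systems analyst's projected database size
    dba_multiplier = 5.0            # DBA asks for 5x: test copies plus a safety buffer
    storage_admin_multiplier = 2.0  # storage admin doubles the primary request
    backup_copy = True              # an equal amount is added as a backup-to-disk target

    primary_tb = analyst_estimate_tb * dba_multiplier * storage_admin_multiplier
    total_tb = primary_tb * (2 if backup_copy else 1)

    print(f"Primary capacity purchased: {primary_tb:.0f} TB")   # 200 TB
    print(f"Total including backup:     {total_tb:.0f} TB")     # 400 TB
    print(f"Expansion over original estimate: {total_tb / analyst_estimate_tb:.0f}x")  # 20x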

It is becoming rarer for organizations to upgrade existing storage systems. There are several reasons for this, but the main ones are that organizations don’t want to extend old technology and that their depreciation models make it easier to move to new systems.

In general, storage capacity always gets used – for one reason or another. Managing it effectively requires effort and discipline. Unfortunately, it is not the most efficient process, given the tools required and the time commitment necessary. And no real revolutionary change to improve the situation seems to be in the adoption phase.

The goal for companies is to never run out of required storage capacity. Mostly, the prediction of how much to acquire is based on hard-won experience. The best practice is always to purchase storage proactively and not when desperate for capacity. The other guiding principle in purchasing storage is to keep up with technology changes and make transitions that take advantage of new developments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

 

October 3, 2012  2:46 PM

IBM delivers new enterprise array powered by Power7

Dave Raffo

IBM today upgraded its flagship storage system, the DS8870 high-end enterprise array. The big additions are the use of 16-core Power7 controllers and support for 1 TB of usable system cache, which IBM claims gives the DS8870 three times the performance of the DS8800 it replaces.

Ed Walsh, VP of marketing and strategy for IBM storage, said the enhancements allow the DS8870 to perform analytics faster and represent more than a step upgrade from the DS8800.

“The platform is brand new,” Walsh said. “I think it should be called the DS9000. We can do operational analytics faster. This allows us to be predictive.”

Outside of the processor, however, the DS8870 has a lot in common with the DS8800. They support the same number of drives and capacity (2.3 PB), sixteen 8 Gbps Fibre Channel or FICON ports, and from two to 16 host adapters. Other changes include standard Full Disk Encryption (FDE) drives across the DS8870 (customers can turn off encryption if they don’t want it) and broader VMware vStorage API for Array Integration (VAAI) support than the DS8800.

The performance boost will have to be impressive for IBM to compete with EMC’s VMAX and Hitachi Data Systems Virtual Storage Platform (VSP), considering that IBM has been dropping storage market share to those vendors.

IBM submitted benchmarks to the Storage Performance Council (SPC) to validate its performance claims. The DS8870’s SPC-2 score of 15,423.66 MBps is the highest for that benchmark, which measures aggregate data rate for large file processing, large database query and video on demand workloads.

For maximum I/O request throughput, the IBM DS8870’s SPC-1 score was 451,082.27 IOPS, the top score for an enterprise array without any solid-state drives (SSDs). The DS8870 does support SSDs, but none were used in the benchmark testing.

EMC doesn’t submit benchmarks for VMAX. The HDS VSP posted an SPC-2 score of 13,147.87 last month and an SPC-1 score of 269,506.69 last November. However, HDS had better price/performance scores on both benchmarks because the systems it used cost roughly half of IBM’s for both tests.
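To see why a lower absolute score can still win on price/performance, here is a rough Python illustration. The SPC-1 results are the published figures above, but the dollar amounts are hypothetical placeholders chosen only to reflect the roughly two-to-one price difference:

    # Rough price/performance illustration. IOPS figures are the published SPC-1
    # results; the prices are hypothetical, not actual tested configurations.
    systems = {
        "IBM DS8870": {"iops": 451_082.27, "price_usd": 2_000_000},  # hypothetical price
        "HDS VSP":    {"iops": 269_506.69, "price_usd": 1_000_000},  # roughly half of IBM's
    }

    for name, s in systems.items():
        print(f"{name}: ${s['price_usd'] / s['iops']:.2f} per SPC-1 IOPS")
    # With these assumed prices, the VSP comes out around 16% cheaper per IOPS
    # even though its absolute throughput is lower.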

Analyst Greg Schulz of StorageIO Group said the performance boost is expected of a new system but added that the FDE support can also prove valuable.

“What catches my eye in addition to the usual performance and capacity improvements are the standard support for full disk encryption (FDE) to protect data at rest, and to also reduce the TCO and improve the ROI for storage systems when it comes time to dispose of hundreds and thousands of disks,” he said. “Depending on how it’s deployed, FDE has the potential to shave not just hours, but days and weeks off of the time or cost associated with running secure erase at the end of a product’s useful life. That’s something that organizations should be doing.”


October 3, 2012  10:49 AM

Riverbed’s enhanced remote-office appliances include vSphere support

Sonia Lelii

Riverbed Technology today announced that its Steelhead EX remote-office WAN acceleration appliances now come with VMware vSphere 5 support, and introduced a new version of its Riverbed Optimization System (RiOS) operating system. The company also rolled out two new Steelhead CX WAN acceleration appliances.

The Steelhead EX devices work in conjunction with the Granite remote block-based storage appliances Riverbed launched in February. Granite centralizes the local storage in the Steelhead devices. Files can be written directly to Granite, which does asynchronous writes to the storage array in the data center.
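Conceptually this is a write-back pattern: the branch acknowledges writes locally and streams them to the data-center array in the background. The Python sketch below is a generic illustration of that pattern only, not Riverbed’s implementation; the queue, the replicate callback and the demo payload are all assumptions:

    import queue
    import threading

    # Generic write-back sketch: acknowledge writes at the branch immediately,
    # then replicate blocks to the data-center array asynchronously.
    class AsyncWriteBack:
        def __init__(self, replicate):
            self._pending = queue.Queue()   # blocks waiting to reach the data center
            self._replicate = replicate     # callable that ships a block over the WAN
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, offset, data):
            self._pending.put((offset, data))
            return True                     # acknowledged locally right away

        def _drain(self):
            while True:
                offset, data = self._pending.get()
                self._replicate(offset, data)   # asynchronous write to central storage
                self._pending.task_done()

    # Demo with a stand-in replication function:
    if __name__ == "__main__":
        shipped = []
        cache = AsyncWriteBack(lambda off, buf: shipped.append((off, len(buf))))
        cache.write(0, b"branch office data")
        cache._pending.join()               # wait for replication in this demo
        print(shipped)                      # [(0, 18)]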

“Granite presents data center storage locally, so you can scale up and scale down the amount of storage from the data center,” said Miles Kelly, senior director of product marketing at Riverbed.

The Steelhead EX’s integration with VMware vSphere hypervisor lets administrators use vCenter to centrally manage virtual machines. Previous EX models were integrated with VMware Server, which didn’t include vCenter for central management of VMs. Kelly said up to five virtual machines can be managed on an EX appliance.

Riverbed also added new Steelhead CX pure WAN acceleration devices. The Steelhead CX5055 and CX7055 appliances have more TCP connections than the 5050 and 7050 models that they will replace. The 5055 and 7055 series contain solid-state drives (SSDs) for better performance. Previously, only the 7050 contained SSDs.

RiOS 8.0 has been upgraded to automatically recognize and control more than 600 applications, and has a new quality of service capability to prioritize PCoIP traffic, which is the protocol used for VMware View.


September 27, 2012  7:47 AM

Don’t let your data center turn into a storage museum

Randy Kerns

Did you ever have a visitor to your data center say, “I didn’t know any of these systems were still around?” The implication here is that your data center is one step away from being a museum. Having “museum worthy” systems is not a badge of honor. It means that the systems in the data center are probably not delivering optimal value.

For a storage system, this can be especially bad. Disk systems are typically in use no longer than five years. There’s good reason for these systems to have a limited lifespan. They are electro-mechanical devices that wear out when in constant usage.

Another reason not to let a disk storage system grow old is the continued advance of technology that allows a new disk system to store more data in a smaller space with lower power and cooling requirements. Every new disk generation, which arrives on roughly an 18-month cycle, adds to the storage efficiency equation. New disk systems increase performance and often add new capabilities that can be exploited for improved operations. For instance, newer features in current systems include support of APIs for server virtualization hypervisors.

But, some storage systems may still be in use even though there are more efficient systems available. Reasons for storing information on these include:

• They may be used as secondary storage for less critical data.
• Costs may be minimized by forgoing full maintenance or support, with IT deciding to accept the risks.
• A legacy application may be running that has not yet been virtualized.

Sometimes the five-year lifespan might start just prior to a major technology shift such as the transition to systems that can incorporate solid state technology. In that case, a system that appears to be an artifact because it does not support the latest technology or features really is not old and may have years left before the asset is depreciated. This may be a good candidate to turn into secondary storage.

Maybe there are good reasons why some data centers look like museums. For storage, however, not keeping up with technology can hurt in other areas. The older system may lack support for new server virtualization features while consuming more physical space and power and failing to meet the performance requirements of demanding applications.

So take a long look at the museum quality of the storage in your data center. It can be a major indicator of inefficiency … and of optimization opportunities.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 25, 2012  12:43 PM

Red Hat CEO calls storage interest red hot

Dave Raffo

Red Hat’s storage platform is still mostly in the testing stage, but CEO Jim Whitehurst said the company took its first six-figure order last quarter and sees a bright future for its software-based clustered NAS.

Whitehurst talked up storage throughout Red Hat’s earnings call Monday, claiming significant interest in Red Hat Storage Server. He doesn’t expect significant revenue until next year but said many customers are running proofs of concept for storage. He also identified storage as an area where the Linux vendor will invest heavily, with plans to integrate storage with its Red Hat Enterprise Virtualization (RHEV) server virtualization platform.

“We’re excited about the potential to disrupt traditional market plays for big data,” Whitehurst said.

He sees Red Hat storage as a low-cost alternative to hardware-based clustered NAS, such as EMC’s Isilon. Red Hat’s storage technology – acquired from startup Gluster for $136 million last year – can’t match Isilon features, but Whitehurst expects it to be enough for many customers.

“The storage space obviously has some well-established, well-regarded vendors,” he said. “But it also looks a lot like Linux did a decade ago with relatively inexpensive solutions. Not all your unstructured data really needs to fly first class.”

Red Hat has already embraced cloud storage with an appliance for Amazon Web Services (AWS) with plans to expand to other cloud providers.

“Not only do we have a significant cost advantage by being software-based,” Whitehurst said, “we also offer huge amounts of flexibility so you can burst up on the cloud and move your data [to the cloud].”

With clustered NAS, storage integrated with virtual servers and cloud storage, Red Hat certainly bears watching as a storage vendor.


September 24, 2012  7:35 AM

SimpliVity receives $25M to push its converged OmniCube

Dave Raffo

SimpliVity closed a $25 million funding round today, giving the startup ammunition to market its OmniCube converged storage stack due to ship later this year.

SimpliVity came out of stealth in August when it started its beta program for OmniCube, which has storage, compute and virtualization in one box. CEO and founder Doron Kempel said he expects the company to grow from 60 people to around 80 by the end of the year, and the new funding “gives us cash to fuel everything we want to do in 2013 in sales, marketing and engineering.”

One of the things Kempel wants to do is convince people that SimpliVity is unique among converged storage systems. He positions it as primary storage that can do just about everything, replacing the need for discrete devices for deduplication, backup, WAN optimization and cloud connectivity.

“We have defined the new IT building block,” he said. “It’s an accelerated software stack that runs on commodity hardware and one person manages it.”

SimpliVity is among a small group of vendors – Nutanix and Scale Computing are others – using the term “hyper-converged” to describe their systems. Kempel said he is trying to differentiate OmniCube from converged stacks sold by established vendors that combine a group of products that were originally created by different companies or different groups inside of a company.

“Convergence is a nebulous term,” he said. “Everybody and their husband says, ‘We’re converged too.’ We want to establish metrics for framing the convergence market. Not all cars are created equal — there are sports cars, trucks, hybrids. It’s the same with converged systems.”

SimpliVity’s B funding round brings its total to $43 million. Kleiner Perkins Caufield & Byers (KPCB) led the round, and original investors Accel Partners and Charles River Ventures also participated.

Kempel sold his last company, backup dedupe vendor Diligent Technologies, to IBM in 2008. He founded Diligent with Moshe Yanai, who led development of EMC’s Symmetrix platform and founded XIV before selling that systems startup to IBM.

When asked if Yanai was involved with SimpliVity, Kempel laughed and said, “I’m not allowed to talk about people I can’t mention.”

Yanai left IBM in 2010 but may be restricted from working with other storage companies.


September 20, 2012  3:03 PM

Flash startup Virident pockets $26M and storage-savvy CEO

Dave Raffo

PCIe flash card startup Virident Systems closed a $26 million funding round this week and hired a new CEO, a move that signals the vendor is entering a new phase.

Former BlueArc CEO Mike Gustafson is Virident’s new boss, replacing founder Kumar Ganapathy. Ganapathy will remain with the company and work closely with the executive team on business strategy, new product development and strategic partnerships.

Ganapathy’s background is in engineering, while Gustafson ran sales and marketing at Fibre Channel switch maker McData before moving to BlueArc in 2005 and selling the NAS vendor to Hitachi Data Systems last year. The change comes as Virident is ready to make its FlashMax II cards generally available following years of intense product development.

“We were looking for somebody who could take us to the next level as far as sales and marketing, and Mike has a lot of experience there,” Virident’s VP of Marketing Shridar Subramanian said. “He enabled the growth of BlueArc in the NAS space, established a strategic OEM relationship with Hitachi, and was responsible for the acquisition of BlueArc.”

Gustafson will likely pursue partnerships with large storage and server vendors at Virident. Although Mitsui Global Investments led the Series D funding round, it also included previous strategic investors Cisco, Intel and an unidentified storage vendor that industry sources say is EMC. To compete with the likes of Fusion-io, Micron Technology and LSI, Virident will need the types of OEM and reseller deals those players have with storage and server companies such as Cisco, EMC and their competitors.

Subramanian said Virident isn’t finished with product development either, and will add software products and features to make its products better equipped for the enterprise than its competitors’ devices.

“There are quite a few players in the market with products,” he said. “But it’s pretty easy to put a bunch of flash chips together and claim you have a flash-based product. What’s difficult is to optimize performance and provide enterprise-class performance. A lot of differentiation will be in software, and that’s where we are investing with the new money as well.”

The round brings Virident’s total funding to $76 million. New investor Hercules Technology Growth Capital and previous investors Globespan Capital Partners, Sequoia Capital and Artiman Ventures also participated.


September 20, 2012  8:35 AM

Dell says having one IT vendor simplifies storage

Dave Raffo

Despite recent market trends to the contrary, Dell storage executives maintain customers want to buy storage and servers from the same vendor.

To make this case, they point to a recent survey conducted by Forrester Consulting and sponsored by Dell. That survey of around 800 IT leaders and storage administrators in the U.S. and Europe shows that most see value in buying storage, servers, networking and IT services from one vendor.

That’s not how it’s been working out, though. Recent storage revenue tracking reports from IDC and Gartner – as well as vendors’ earnings reports – show pure-play storage vendors EMC, NetApp and Hitachi Data Systems have gained market share at the expense of Dell, IBM and Hewlett-Packard (HP). Pure-play storage vendors say that’s because they innovate more than server and infrastructure vendors who dabble in storage.

Dell has built its storage business independent of servers, however, with the acquisitions of array vendors EqualLogic and Compellent plus storage software acquisitions. And Dell execs point out that revenue from products with their own storage IP has increased over the last year. Dell’s overall storage numbers are down because they reflect the loss of revenue generated by Dell’s discontinued OEM deal with EMC.

“There’s another story on Dell’s numbers,” said Travis Vigil, executive director for Dell storage. “PowerVault, EqualLogic and Compellent sales are increasing. With EqualLogic, we went from 4,000 customers to close to 50,000 customers [since 2008]. The Compellent business has also scaled quickly at Dell. When you look at Dell storage IP, we’re gaining share in the market.”

While developing its own storage, Dell is also integrating it with its server and networking technology in converged products. Today marked the general availability of the EqualLogic Blade Array previewed at Dell Storage Forum in June. The Blade Array packages EqualLogic iSCSI storage with PowerEdge blade servers and Force10 MXL switches in a 10U chassis.

Mike Quirin, IT manager for SAN and VMware at Italy-based transportation company Ansaldo STS, said he tested the EqualLogic Blade Array and will likely purchase a few. Quirin, based in Ansaldo’s U.S. data center in Pittsburgh, Pa., said he uses EMC storage in the data center but finds the blades a good fit for systems sent out to customers with custom applications for monitoring and reporting.

He said the Blade Array lets customers quickly configure the converged system without any IT intervention.

“Most of the solutions we sent out to customers are blade solutions,” he said. “We had a chassis filled with eight blades and separate storage. With the Blade Array, we could send out a data center in a box without external cabling and hassles. I could get this up and running in 15 minutes without any instructions at all.”

Quirin agreed there are advantages to buying equipment from one vendor. He said he bought most of his EMC storage through Dell. “It makes it easier for us to not run around with too many different vendors,” he said.

The Forrester survey of 513 IT storage administrators and 284 CIOs, managers and directors found that 54% of each group said they see “some value” in buying storage from the same vendor they buy servers and networking from and consider doing so. Thirty-four percent of the storage admins and 32% of the CIO group said they do it when possible, but only 9% of each group said they do it exclusively.

Other findings in the survey weren’t exactly shocking. Most IT leaders and storage admins find managing storage a complex task, they want technology that is automated and easier to use, and 48% said they could spend more time developing business strategy if managing storage didn’t take up so much time.

One noteworthy finding was that 85% said they would consider paying more for a storage system if it saves a considerable amount of work time.


September 14, 2012  8:08 AM

Evaluating storage performance requires credible information

Randy Kerns

Almost every conversation about storage includes performance. That’s because storage system performance is important for the responsiveness of applications. Most vendors go to great efforts to provide performance data for their storage systems. This performance data provides valuable information for making decisions about deployment and how particular applications are used.

But the performance information must be credible for it to help make decisions. If the performance data is inaccurate or not applicable for the way the customer will use the system, that vendor’s performance data will be discounted by decision makers in the future. Vendor performance information is greeted with skepticism anyway. Producing inaccurate or inapplicable information quickly turns skepticism into distrust.

For performance information to be useful, the correct performance testing software must be used in a controlled environment that represents the customer applications and configurations. The use case dictates the type of information required, and performance testing software must be capable of reproducing the desired environment. Using the wrong storage exerciser program can give misleading information and misrepresent the performance for a particular application.

A good example would be performance for a Virtual Desktop Infrastructure (VDI) environment. VDI represents a complex workload for storage that changes quickly. A storage system that can respond to changing workloads would have advantages over one that may be excellent in certain aspects but cannot adapt quickly.

The performance testing of storage for VDI environments must replicate the dynamics of the changing VDI workloads. A standard exerciser program meant to exhibit storage system characteristics by driving I/Os with predefined read/write ratios cannot mimic the actual workload. The only way to get useful information about a storage system’s capabilities in a VDI environment is to play captured streams from actual workloads back against the storage system. This demonstrates the system’s ability to adapt to the complexity of the I/O characteristics, and scaling the workload shows how many virtual desktops the system can support within acceptable parameters.
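A minimal sketch of the trace-replay idea follows, assuming a captured workload stored as a CSV of (delay_s, op, offset, length) records and a target device or file; the trace format and paths are hypothetical, and real replay tools also reproduce queue depth and concurrency, which this single-threaded example does not:

    import csv
    import os
    import time

    # Replay a captured I/O trace against a target block device or file.
    # Assumed trace columns: delay_s, op ("read"/"write"), offset, length.
    def replay(trace_path, target_path):
        fd = os.open(target_path, os.O_RDWR)
        try:
            with open(trace_path, newline="") as f:
                for row in csv.DictReader(f):
                    time.sleep(float(row["delay_s"]))           # preserve inter-I/O timing
                    offset, length = int(row["offset"]), int(row["length"])
                    if row["op"] == "read":
                        os.pread(fd, length, offset)            # read data, discard it
                    else:
                        os.pwrite(fd, b"\0" * length, offset)   # write a dummy payload
        finally:
            os.close(fd)

    # Example with hypothetical paths:
    # replay("vdi_boot_storm_trace.csv", "/dev/sdb")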

For IT personnel making a strategic decision, evaluating performance requires testing in their environment, running industry standard benchmarks specific to the types of applications they use, or using third-party supplied information. Results should only be considered if they are relevant to the application.

Introducing new storage systems into environments represents risks for IT. The big risk is not having the performance to meet the needs. Performance information obtained with relevant testing and test software can help minimize those risks.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 12, 2012  11:15 PM

HGST prepares helium-based hard drives to increase density

Sonia Lelii

Western Digital’s Hitachi Global Storage Technologies (HGST) intends to ship its first 3.5-inch, helium-based hard disk drive in 2013.

Helium-based hard drives, a technology that has been under development for about eight years, are the next evolution in drive technology, according to HGST vice president of product marketing Brendan Collins. Helium will replace air in the Sealed HDD platform HGST announced today. Helium has one-seventh the density of air, allowing manufacturers to build in seven spinning disks instead of five in a 3.5-inch drive. That could boost capacity in the 3.5-inch form factor by 40%.

“It’s going to radically change the way data is stored,” Collins said. “By using helium, you lower the power consumption by 23 percent while increasing capacity by 40 percent. It’s the same form factor so you don’t have to change anything on the system level.”

Collins said air-based drives are reaching a point of diminishing returns because manufacturers will no longer be able to add tracks to increase capacity. Because air is denser, it buffets the spinning disks and causes vibration. Helium puts less drag on the spinning disk stack, so the mechanical power required by the motor is reduced, and the lower force on the disks and the head-positioning arms means platters and data tracks can be placed closer together, scaling data density.
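The 40% figure follows directly from the platter count. A quick back-of-the-envelope check in Python (the per-platter capacity is a placeholder; only the 7-to-5 ratio matters):

    # Sanity check of the 40% capacity claim: 7 platters vs. 5 in the same
    # 3.5-inch form factor. The per-platter capacity is hypothetical.
    platters_air, platters_helium = 5, 7
    tb_per_platter = 0.8                       # placeholder value

    air_tb = platters_air * tb_per_platter
    helium_tb = platters_helium * tb_per_platter
    gain = (helium_tb - air_tb) / air_tb

    print(f"{air_tb:.1f} TB -> {helium_tb:.1f} TB ({gain:.0%} more capacity)")
    # 4.0 TB -> 5.6 TB (40% more capacity)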

“The sealed helium HDD platform will provide high capacity storage for the next 10 years,” Collins said. “It’s an ideal platform for bulk and cold storage.”

