Storage Soup


October 15, 2012  7:41 AM

Amplidata adds denser, faster object storage nodes

Dave Raffo

Fresh off a CEO change and a funding round, object storage vendor Amplidata today added a larger-capacity storage node and an operating system upgrade that supports 16 TB object sizes.

The AmpliStor AS36 is Amplidata’s densest, highest-capacity node. It holds 12 3 TB drives – up from 10 on the AS30 – for 36 TB per node, and can scale to 1.4 PB in a rack. Amplidata also gave the AS36 a performance boost over its predecessors through the addition of the Intel E3 processor and the option of a 240 GB multi-level cell (MLC) Intel SSD in the storage node. Amplidata previously used SSDs in its controllers but not in its storage nodes.
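
For readers checking the math, here is a minimal back-of-the-envelope sketch of how those density figures fit together; the implied node count per rack is derived from Amplidata’s stated numbers rather than taken from a spec sheet.

```python
# Back-of-the-envelope density math for the AmpliStor AS36, using only the
# figures quoted above. The nodes-per-rack count is an inference from those
# numbers, not a published Amplidata spec.
drives_per_node = 12
drive_tb = 3
node_tb = drives_per_node * drive_tb        # 36 TB raw per node
rack_pb = 1.4
nodes_per_rack = rack_pb * 1000 / node_tb   # ~39 nodes to reach 1.4 PB per rack

print(f"Per-node raw capacity: {node_tb} TB")
print(f"Implied nodes per rack: {nodes_per_rack:.0f}")
```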

Paul Speciale, Amplidata’s VP of products, said the SSDs are included for routing small files. He said the Sandy Bridge CPUs result in a 40% speed increase over the AS30 because they can sustain full line-rate performance to each node.

The biggest improvement in the AmpliStor 3.0 software is support for larger objects. The previous version topped out at 500 GB per object; 3.0 raises the limit to 16 TB for customers with big files. Future versions will likely support objects even larger than 16 TB, but Amplidata first has to make sure larger files work with its erasure coding.

“We think our architecture can go higher as far as object sizes, but we have to put it into the test cycle,” Speciale said. “We also have to be able to repair these drives in a reasonable amount of time.”
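
Amplidata doesn’t disclose its erasure-coding parameters here, but the repair concern scales roughly with object size. The sketch below is purely illustrative, with made-up drive throughput, to show why a 16 TB object takes far longer to repair than a 500 GB one.

```python
# Purely illustrative: the read throughput is an assumption, not an
# AmpliStor parameter.
def repair_hours(object_tb, read_mbps=100):
    """Rough single-stream time to rebuild one lost fragment of an
    erasure-coded object: decoding means re-reading the surviving data
    fragments, which together amount to roughly the full object size."""
    seconds = object_tb * 1000 * 1000 / read_mbps   # TB -> MB at read_mbps MB/s
    return seconds / 3600

for size_tb in (0.5, 16):
    print(f"{size_tb} TB object: ~{repair_hours(size_tb):.1f} hours to rebuild one fragment")
```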

AmpliStor 3.0 also can rebalance storage on nodes automatically after adding capacity. Previous versions allowed customers to add storage on the fly, but did not automatically rebalance.

Last month Amplidata named former Intel executive and Atempo CEO Mike Wall as its new chief, replacing founder Wim De Wispelaere. De Wispelaere remains with the company as chief technology officer.

Amplidata also received $6 million in funding from backup and archiving vendor Quantum at the time, bringing its total funding to $20 million. Quantum has an OEM deal with Amplidata to sell AmpliStor technology under the Quantum StorNext archiving brand.

AmpliStor products are used in cloud storage as well as for archiving. Speciale said he expects the Quantum deal to drive AmpliStor deeper into the media/entertainment, genomics and government markets where StorNext has the most traction.

October 12, 2012  10:29 AM

Astute takes early lead in VDI benchmark scores

Dave Raffo

Astute Networks this month became the second vendor to publish VDI-IOmark benchmark numbers, and it promptly proclaimed itself the industry’s lowest-cost storage option per virtual desktop.

VDI-IOmark was developed by the Evaluator Group analyst firm to test storage system performance under virtual desktop infrastructure (VDI) workloads. The benchmark replicates a storage workload running multiple VMware View VDI instances, and measures the number of VDI users the system supports.

Astute’s ViSX VM storage appliance supported 400 standard users with a configuration priced at $30,600, which comes to $76.50 per user. The benchmark results showed that a 2U ViSX can provision 140,000 sustained random IOPS that can be shared by all VMs on all hosts over an Ethernet network. Astute ran the benchmark on a ViSX appliance with 2.1 TB of usable capacity, all of it on solid-state drives (SSDs).

Evaluator Group senior partner Russ Fellows said other vendors – fewer than 10 – have run the benchmark but have not made their results public. Hitachi Data Systems published the first set of numbers in January for its BlueArc Mercury 110 NAS array, which came out to $146.19 per user (1,536 users for a $224,546 system). None of the unpublished benchmark results matched Astute’s price per desktop, which is the main reason they remain unpublished.
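
For reference, the per-desktop figures quoted above follow directly from the published prices and user counts; a minimal check:

```python
# Cost-per-desktop check for the two published VDI-IOmark results,
# using the prices and user counts quoted above.
results = {
    "Astute ViSX": (30_600, 400),                 # $30,600 config, 400 users
    "HDS BlueArc Mercury 110": (224_546, 1_536),  # $224,546 system, 1,536 users
}

for system, (price, users) in results.items():
    print(f"{system}: ${price / users:.2f} per desktop")
```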

“Some of the big vendors are afraid,” Fellows said. “They don’t want to publish unless they’re the best.”

Fellows said some benchmarked systems are all flash, while others are hybrids using SSDs and hard drives. He said Astute balanced the performance of flash with a lower price than many other all-flash systems.

“Being all-flash helps a lot,” he said. “Some flash in high-end systems is incredibly pricey. Not only does Astute have flash, but it has a competitive price.”

Len Rosenthal, Astute’s senior VP of marketing, said the TCP protocol accelerator chip Astute uses for its iSCSI appliance also plays a big part in performance. “The big difference we have is our Data Pump engine,” he said. “That’s our accelerator for protocol processing, and it allows us to drive up performance in a way that no other storage can. We have dedicated offload technology.”

Rosenthal said he’s confident that no other vendor can beat Astute’s performance at its price. “We wanted to put a stake in the ground,” he said. “If others want to shoot at us, that’s fine. Dollars per VDI is the best thing about our system. Others can throw a $300,000 system at us and beat our system, but [ViSX] is a $30,000 system.”

Astute has set the bar, but the benchmark numbers will become more valuable when we have many more systems to compare. Some vendors and users run the long-standing Iometer benchmark for VDI, but Fellows said that benchmark is useless for VDI workloads.

“It’s a completely fabricated benchmark for VDI,” he said. “Iometer does not produce a workload anything like a VDI user would. It’s not realistic, and it’s misleading to users. There are only a few tools that can generate VDI workloads.”

Other VDI benchmarks include VMware View Planner, Citrix Desktop Transformation Accelerator and Login VSI.


October 10, 2012  9:54 AM

NetApp, Cisco steer reference architectures into ‘express’ lane

Dave Raffo

NetApp and Cisco are expanding their FlexPod reference architecture concept to SMBs with the introduction of ExpressPod.

The best way to think of ExpressPod is as FlexPod’s little brother. FlexPod, which has been on the market for just under two years, uses enterprise storage from NetApp and servers and switching from Cisco. ExpressPod includes NetApp’s FAS2000 series SMB storage along with Cisco’s low-end Unified Computing System (UCS) servers and Nexus switches.

The first two ExpressPod architectures come in small and medium sizes. Both include Cisco UCS C220 M3 servers and Cisco Nexus 3048 switches. The small version uses NetApp FAS2220 storage with 32 server cores and the medium includes NetApp FAS2240 arrays and 64 server cores. Like FlexPods, ExpressPods are pre-validated by NetApp and Cisco and include an implementation guide. The reference architectures are sold by NetApp channel partners.

Adam Fore, NetApp director of solutions marketing, said ExpressPod architectures are designed for companies with fewer than 500 employees. ExpressPods are tested with VMware virtualization software, but Fore said the configurations also support Microsoft Hyper-V and other hypervisors.

NetApp and Cisco cite ease of use and lower cost as drivers for implementing ExpressPod, but they won’t give pricing information. They refer all pricing questions to their channel partners.

NetApp is taking a different reference architecture strategy on the lower end than its main rival EMC. While Cisco is the preferred server partner for EMC’s Vspex reference architecture at the high end, EMC will push channel partners to build Vspex architectures with Lenovo servers at the SMB level.

NetApp also added clustering capabilities from its latest Data Ontap operating system (8.1.1) to FlexPods, allowing them to scale to 24 nodes. And NetApp and Cisco have added a validated FlexPod design for customers running Oracle RAC databases with VMware vSphere and vCenter.

NetApp and Cisco claim they have 1,300 FlexPod customers – up from 175 a year ago.


October 8, 2012  3:14 PM

HP debuts new Ibrix scale-out NAS system

Sonia Lelii

Hewlett-Packard Co. recently announced a new enterprise scale-out network attached storage (NAS) system – the HP Ibrix X9730 – that scales to 1.68 PB of capacity in a single system and 16 PB in a single namespace.

The storage system, which replaces the HP Ibrix X9720 model, handles typical NAS functions but is also designed for high-volume, long-term active archiving of unstructured data. The array is three times faster on writes and five times faster on reads than the 9720, said Patrick Osborne, a director of product management in HP’s storage division.

“This system is bigger and denser,” Osborne said. “You can deploy a 1.7-petabyte cluster in about two hours from the time you power it on. The system is meant for tier three or four archiving. We are not selling it for high-performance, parallel computing. The software in the system is more for longer data storage but you can use it as an unstructured data repository. It’s a NAS system at the end of the day.”

The HP Ibrix X9730 is 5U and scales up to 16 file server nodes and eight capacity blocks, with each block containing 70 drives. The system now supports 3 TB and 2 TB midline SAS drives, as well as CIFS, NFS, HTTP, HTTP/S, WebDAV, FTP, FTP/S and NDMP protocols. A two-node 210 TB configuration is priced at $223,589, or roughly $1 per gigabyte, Osborne said.
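
The dollar-per-gigabyte claim checks out against those numbers, assuming decimal units; a quick calculation:

```python
# Quick price-per-gigabyte check for the two-node 210 TB HP Ibrix X9730
# configuration quoted above (decimal units assumed).
price_usd = 223_589
capacity_gb = 210 * 1000
print(f"${price_usd / capacity_gb:.2f} per GB")   # ~$1.06/GB, i.e. roughly $1 a gigabyte
```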

In comparison, the 9720 scaled up to 1.2 PB. That product is designated as end-of-life but will be supported for five years.

Like the 9720, the 9730 system is targeted at media, entertainment and content repository uses. It supports archive applications such as Symantec Enterprise Vault and CommVault Simpana. The 9730 comes with HP Ibrix Constant Validation Software, which generates checksums to verify that data has not been corrupted.

It also comes with a data mobility feature for tiering data within the same namespace based on data access and file type. A WORM data retention capability marks files as retained. HP’s Ibrix operating system software v6.1 streamlines and simplifies deployment of the Ibrix storage system so it can be implemented in a shorter time. The system is based on a pay-as-you-grow architecture, reducing the chance of over-provisioning.

 


October 8, 2012  7:50 AM

Estimating required storage capacity proves tricky

Randy Kerns

Buying storage is an ongoing process that may be periodic in some environments and seemingly continuous in others. There are several primary reasons given for storage purchases:

  • New application deployments that require storing significant amounts of data.
  • Performance or features are required on storage systems to optimize environments such as server virtualization.
  • A technology transition is necessary to replace systems that have reached the end of their economic life.
  • Additional capacity is required to handle the demand to store more information.

A common decision point in purchasing storage for all of these reasons is how much capacity to buy. There may be different types or classes of storage that segment the purchase, but the question of how much remains.  Finding the answer to this is more complicated than it seems.  It starts with evaluating the requirements.

In working with many IT operations on capacity planning, I’ve seen quite a variety of approaches to coming up with the amount of capacity involved in storage purchases.  The different methods range from elaborate capacity planning models to taking what is asked for by application and business owners and multiplying by 10.

One reason a multiplier is used for deciding on the amount of storage to purchase is that capacity demands continue to increase faster than expected, and failing to meet storage demand immediately has negative consequences. Another reason is budgetary uncertainty: there is a feeling that it may be more difficult to get funding for a purchase in the future, so IT buyers are not sure when they will get another chance to buy storage because of a potential “freeze.”

The information typically provided for the amount of storage required may prove inaccurate. An example of this comes from the deployment of a new application that stores information in a database. The systems analyst may have determined that the capacity for the database may ultimately be 20 TB. The database administrator will request 100 TB, allowing extra capacity for testing and a buffer in case the systems analyst has underestimated the need. The storage admin may double that request for primary capacity to 200 TB and then add another 200 TB for a backup-to-disk target. Now the purchase for a 20 TB primary need has expanded to 10 times that amount for primary storage and an equal amount for data protection.
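
Written as arithmetic, the hand-offs in that example compound like this:

```python
# Walking the article's capacity-request example through each hand-off.
analyst_estimate_tb = 20                    # what the application is expected to need
dba_request_tb = 100                        # DBA adds test copies and a safety buffer
primary_purchase_tb = dba_request_tb * 2    # storage admin doubles the request
backup_target_tb = primary_purchase_tb      # equal amount for the backup-to-disk target

total_tb = primary_purchase_tb + backup_target_tb
print(f"Estimated need: {analyst_estimate_tb} TB")
print(f"Purchased: {primary_purchase_tb} TB primary + {backup_target_tb} TB backup "
      f"= {total_tb} TB ({total_tb // analyst_estimate_tb}x the original estimate)")
```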

It is becoming rarer for organizations to upgrade existing storage systems. There are several reasons for this, but the main ones are that organizations don’t want to extend old technology, or that their economic models for depreciation make it easier to move to new systems.

In general, storage capacity always gets used – for one reason or another. Managing it effectively requires effort and discipline. Unfortunately it is not the most efficient process, given the tools required and the time commitment necessary. And no real revolutionary change to improve the situation seems to be in the adoption phase.

The goal for companies is to never run out of required storage capacity. Mostly, the prediction of how much to acquire is based on hard-won experience. The best practice is always to purchase storage proactively and not when desperate for capacity. The other guiding principle in purchasing storage is to keep up with technology changes and make transitions that take advantage of new developments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

 


October 3, 2012  2:46 PM

IBM delivers new enterprise array powered by Power7

Dave Raffo

IBM today upgraded its flagship storage system, the DS8870 high-end enterprise array. The big additions are the use of 16-core Power7 controllers and support for 1 TB of usable system cache, which IBM claims gives the DS8870 three times the performance of the DS8800 it replaces.

Ed Walsh, VP of marketing and strategy for IBM storage, said the enhancements allow the DS8870 to perform analytics faster and represent more than a step upgrade from the DS8800.

“The platform is brand new,” Walsh said. “I think it should be called the DS9000. We can do operational analytics faster. This allows us to be predictive.”

Outside of the processor, however, the DS8870 has a lot in common with the DS8800. Both support the same number of drives and the same capacity (2.3 PB), sixteen 8 Gbps Fibre Channel or FICON ports, and from two to 16 host adapters. Other changes: the DS8870 ships with all Full Disk Encryption (FDE) drives (customers can turn off encryption if they don’t want it) and offers broader VMware vStorage APIs for Array Integration (VAAI) support than the DS8800.

The performance boost will have to be impressive for IBM to compete with EMC’s VMAX and Hitachi Data Systems Virtual Storage Platform (VSP), considering that IBM has been dropping storage market share to those vendors.

IBM submitted benchmarks to the Storage Performance Council (SPC) to validate its performance claims. The DS8870’s SPC-2 score of 15,423.66 MBps is the highest for that benchmark, which measures aggregate data rate for large file processing, large database query and video on demand workloads.

For maximum I/O request throughput, the IBM DS8870’s SPC-1 score was 451,082.27 IOPS, the top score for an enterprise array without any solid-state drives (SSDs). The DS8870 does support SSDs, but none were used in the benchmark testing.

EMC doesn’t submit benchmarks for VMAX. The HDS VSP had an SPC-2 score of 13,147.87 last month and an SPC-1 score of 269,506.69 last November. However, HDS had better price/performance scores on both benchmarks because the systems it used cost roughly half as much as IBM’s for both tests.
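
The article doesn’t give the tested-configuration prices, so the figures below are placeholders that only preserve the stated roughly 2:1 price ratio; they illustrate how a cheaper system with lower IOPS can still post the better SPC-1 price/performance number.

```python
# Illustration only: the prices are placeholders reflecting the "roughly half
# the cost" statement above, not the actual SPC-1 tested-configuration prices.
ibm_iops, hds_iops = 451_082.27, 269_506.69
ibm_price, hds_price = 2_000_000, 1_000_000   # placeholder prices, ~2:1 ratio

print(f"IBM DS8870: ${ibm_price / ibm_iops:.2f} per SPC-1 IOPS")
print(f"HDS VSP:    ${hds_price / hds_iops:.2f} per SPC-1 IOPS")
# Even at ~60% of the IOPS, the half-price system wins on dollars per IOPS.
```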

Analyst Greg Schulz of StorageIO Group said the performance boost is expected of a new system but said the FDE support can also prove valuable.

“What catches my eye in addition to the usual performance and capacity improvements are the standard support for full disk encryption (FDE) to protect data at rest, and to also reduce the TCO and improve the ROI for storage systems when it comes time to dispose of hundreds and thousands of disks,” he said. “Depending on how it’s deployed, FDE has the potential to shave not just hours, but days and weeks off of the time or cost associated with running secure erase at the end of a product’s useful life. That’s something that organizations should be doing.”


October 3, 2012  10:49 AM

Riverbed’s enhanced remote-office appliances include vSphere support

Sonia Lelii

Riverbed Technology today announced that its Steelhead EX remote-office WAN acceleration appliances now come with VMware vSphere 5 support, and introduced a new version of its Riverbed Optimization System (RiOS) software. The company also rolled out two new core Steelhead WAN acceleration appliances.

The Steelhead EX devices work in conjunction with the Granite remote block-based storage appliances Riverbed launched in February. Granite centralizes the local storage in the Steelhead devices. Files can be written directly to Granite, which does asynchronous writes to the storage array in the data center.

“Granite presents data center storage locally, so you can scale up and scale down the amount of storage from the data center,” said Miles Kelly, senior director of product marketing at Riverbed.

The Steelhead EX’s integration with VMware vSphere hypervisor lets administrators use vCenter to centrally manage virtual machines. Previous EX models were integrated with VMware Server, which didn’t include vCenter for central management of VMs. Kelly said up to five virtual machines can be managed on an EX appliance.

Riverbed also added new Steelhead CX pure WAN acceleration devices. The Steelhead CX5055 and CX7055 appliances support more TCP connections than the 5050 and 7050 models they replace. The 5055 and 7055 series contain solid-state drives (SSDs) for better performance; previously, only the 7050 contained SSDs.

RiOS 8.0 has been upgraded to automatically recognize and control more than 600 applications, and it adds a new quality-of-service capability to prioritize PCoIP traffic, the protocol used by VMware View.


September 27, 2012  7:47 AM

Don’t let your data center turn into a storage museum

Randy Kerns

Have you ever had a visitor to your data center say, “I didn’t know any of these systems were still around”? The implication is that your data center is one step away from being a museum. Having “museum-worthy” systems is not a badge of honor. It means the systems in the data center are probably not delivering optimal value.

For a storage system, this can be especially bad. Disk systems are typically in use no longer than five years. There’s good reason for these systems to have a limited lifespan. They are electro-mechanical devices that wear out when in constant usage.

Another reason not to let a disk storage system grow old is the continued advance of technology, which allows more data to be stored in a smaller space with lower power and cooling requirements. Every new disk generation, which arrives on roughly an 18-month cycle, adds to the storage efficiency equation. New disk systems increase performance and often add new capabilities that can be exploited for improved operations. For instance, newer features in current systems include support for server virtualization hypervisor APIs.

But some storage systems remain in use even though more efficient systems are available. Reasons for keeping information on them include:

• They may be used as secondary storage for less critical data.
• The costs may be minimized by not having full maintenance or support and IT has made a decision to take the risks.
• A legacy application may be running that has not yet been virtualized.

Sometimes the five-year lifespan might start just prior to a major technology shift such as the transition to systems that can incorporate solid state technology. In that case, a system that appears to be an artifact because it does not support the latest technology or features really is not old and may have years left before the asset is depreciated. This may be a good candidate to turn into secondary storage.

Maybe there are good reasons why some data centers look like museums. For storage, however, not keeping up with technology can hurt in other areas. An older system may lack support for new server virtualization features while consuming more physical space and power, and it may fall short of the performance that demanding applications require.

So take a long look at the museum quality of the storage in your data center. It can be a major indicator of inefficiency … and of optimization opportunities.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 25, 2012  12:43 PM

Red Hat CEO calls storage interest red hot

Dave Raffo

Red Hat’s storage platform is still mostly in the testing stage, but CEO Jim Whitehurst said the company took its first six-figure order last quarter and sees a bright future for its software-based clustered NAS.

Whitehurst talked up storage throughout Red Hat’s earnings call Monday, claiming significant interest in Red Hat Storage Server. He doesn’t expect significant revenue until next year but said many customers are running proofs of concept for storage. He also identified storage as an area where the Linux vendor will invest heavily, with plans to integrate storage with its Red Hat Enterprise Virtualization (RHEV) server virtualization platform.

“We’re excited about the potential to disrupt traditional market plays for big data,” Whitehurst said.

He sees Red Hat storage as a low-cost alternative to hardware-based clustered NAS, such as EMC’s Isilon. Red Hat’s storage technology – acquired from startup Gluster for $136 million last year – can’t match Isilon’s features, but Whitehurst expects it to be enough for many customers.

“The storage space obviously has some well-established, well-regarded vendors,” he said. “But it also looks a lot like Linux did a decade ago with relatively inexpensive solutions. Not all your unstructured data really needs to fly first class.”

Red Hat has already embraced cloud storage with an appliance for Amazon Web Services (AWS) with plans to expand to other cloud providers.

“Not only do we have a significant cost advantage by being software-based,” Whitehurst said, “we also offer huge amounts of flexibility so you can burst up on the cloud and move your data [to the cloud].”

With clustered NAS, storage integrated with virtual servers and cloud storage, Red Hat certainly bears watching as a storage vendor.


September 24, 2012  7:35 AM

SimpliVity receives $25M to push its converged OmniCube

Dave Raffo

SimpliVity closed a $25 million funding round today, giving the startup ammunition to market its OmniCube converged storage stack due to ship later this year.

SimpliVity came out of stealth in August when it started its beta program for OmniCube, which has storage, compute and virtualization in one box. CEO and founder Doron Kempel said he expects the company to grow from 60 people to around 80 by the end of the year, and the new funding “gives us cash to fuel everything we want to do in 2013 in sales, marketing and engineering.”

One of the things Kempel wants to do is convince people that SimpliVity is unique among converged storage systems. He positions it as primary storage that can do just about everything, replacing the need for discrete devices for deduplication, backup, WAN optimization and cloud connectivity.

“We have defined the new IT building block,” he said. “It’s an accelerated software stack that runs on commodity hardware and one person manages it.”

SimpliVity is among a small group of vendors – Nutanix and Scale Computing are others – using the term “hyper-converged” to describe their systems. Kempel said he is trying to differentiate OmniCube from converged stacks sold by established vendors that combine a group of products that were originally created by different companies or different groups inside of a company.

“Convergence is a nebulous term,” he said. “Everybody and their husband says, ‘We’re converged too.’ We want to establish metrics for framing the convergence market. Not all cars are created equal — there are sports cars, trucks, hybrids. It’s the same with converged systems.”

SimpliVity’s B funding round brings its total to $43 million. Kleiner Perkins Caufield & Byers (KPCB) led the round, and original investors Accel Partners and Charles River Ventures also participated.

Kempel sold his last company, backup dedupe vendor Diligent Technologies, to IBM in 2008. He founded Diligent with Moshe Yanai, who led development of EMC’s Symmetrix platform and founded XIV before selling that systems startup to IBM.

When asked if Yanai was involved with SimpliVity, Kempel laughed and said, “I’m not allowed to talk about people I can’t mention.”

Yanai left IBM in 2010 but may be restricted from working with other storage companies.

