Storage Soup


October 19, 2012  4:00 PM

SNW notebook: Fujitsu, Avere strike up a match

Dave Raffo

SANTA CLARA, Calif. – News and notes from this week’s Fall Storage Networking World (SNW):

Avere and Fujitsu America have forged a “meet in the channel” partnership matching Avere’s NAS acceleration device with Fujitsu storage arrays.

The vendors and their channel partners are bundling a two-node Avere FXT 3100 Edge filer cluster with a Fujitsu core filer built from UDS NAS controllers and an Eternus DX80 S2 Disk Storage System. Avere and Fujitsu call it the “100/100/100” bundle because it provides 100 TB of capacity and 100,000 IOPS for $100,000. Larger bundles are available, up to 2 PB and 2.5 million IOPS.

Avere CEO Ron Bianchini said the idea for the bundles came about because Avere and Fujitsu had common media and entertainment customers using their products. “We’ve been meeting often in customer sites,” he said. “They [Fujitsu] do data management well, and we do off-load well.” …

Former LeftHand Networks CEO Bill Chambers has taken over the CEO role at Starboard Storage. Chambers was LeftHand’s CEO when Hewlett-Packard bought the iSCSI vendor for $360 million in 2008. He joined Starboard as executive chairman shortly before the vendor came out of stealth earlier this year as a re-launched version of Reldata. He replaces Victor Walker as CEO. Walker had been CEO of Reldata since early 2011 and stayed on through the re-launch. Starboard hasn’t announced the CEO change, but Chambers is listed as CEO on the company website. …

Sepaton began shipping its S2100-ES3 virtual tape library (VTL) with Hitachi Data Systems HUS 100 storage on the back end and its latest software version.

The system can scale to 2 PB and the new software supports DBeXstream technology that speeds deduplication of multistreamed and multiplexed enterprise databases. Sepaton has used HDS storage in its VTLs since 2010, but the HUS platform hit the market in August.

Pricing for the S2100-ES3 Series starts at $335,000, and S2100-ES2 customers can add new HUS 110 storage to their libraries. …

Imation has kept busy this year integrating data security acquisitions into its disk and removable drive storage, establishing its CyberSafe brand of encryption, identity and authentication, and key management capabilities.

Next up is moving the data deduplication it acquired from Nine Technology last December into its backup products. Brian Findlay, executive director of Imation’s storage product management, said the vendor will integrate dedupe into its DataGuard appliances that use hard drive and RDX removable storage. Imation is also working on an integrated storage appliance using Nine backup technology.

The 2013 roadmap also includes a private cloud backup offering that Imation will either host itself or sell as software for service providers to host. Imation now supports public clouds through cloud seeding and replication between sites.

“The cloud is coming,” Findlay said. “SMBs are still comfortable with onsite backup. It’s one thing to get your data up there, but another thing to restore. But you can move a lot of data to the cloud with RDX.”

October 18, 2012  8:09 AM

When organizational issues inhibit IT progress

Randy Kerns

Information technology (IT) must continue to adapt and change as new demands arise and new technology is introduced. The new demands include more capacity for storing information as well as changes in procedures such as security and compliance.

The introduction of new technology presents the opportunity to obtain greater value from IT investments. Deploying server virtualization technology and increasing the number of servers virtualized has brought economic value and IT agility. New technology is a competitive issue, helping businesses handle information more effectively and faster.

Still, many IT operations take longer than they should to introduce and embrace new technology. So what is holding companies back from seizing an obvious advantage? Why does it take a major reboot of IT for some organizations to make changes? Across many IT operations, a few common reasons delay seizing these opportunities.

The most common reason is that the organizational structure for IT inhibits transformational changes. The structure creates a natural resistance to change for several reasons:

• There are many people involved in direction setting and approvals. Some may be other business unit owners or related organizations.

• Stakeholders brought in to participate in decision processes need to be informed and educated on technology and requirement changes.

• With more people involved, parochialism can produce new demands that disrupt an otherwise efficient process.

To illustrate this problem, I will go through one of many examples that I’ve dealt with recently. In this case, the IT organization had been compartmentalized over time after individuals were promoted and functions separated.

The result was a number of IT directors who had equal authority and covered areas of specialization in IT. Other IT directors were given responsibility for advocating for specific business units, again with equal weight. These directors could veto any change in IT that they did not agree with, and the CIO could not force change without consensus.

This meant that every substantive decision required the cooperation or endorsement of all the internal IT directors and of the business units represented by the other directors. Education for a technology change required large group meetings, which were hard to schedule because of the parties’ limited availability.

Compounding the problem, various vendors called on the individual IT directors and created internal competition and confusion. That caused delays in needed changes and frustration that it took more work to educate and convince others than to do actual implementation. The IT organization kept falling behind in technology and other advances. It was perceived as having archaic operations. Ultimately, an examination of outsourcing was seen as a means to implement change.

Business structures for IT need to match the requirements and pace of change in IT. They must allow for change as a natural part of competitive improvement. The decision-making process must be effective and timely, not mired in the need to include every possible person. The structure must include strategic planning as part of an organizational process, and that process should include technology evaluation, education, and an understanding of industry best practices. Without a structure that matches the rate of change required of IT, IT will periodically have to do a major reset.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 16, 2012  4:40 PM

Microsoft strengthens cloud play with StorSimple acquisition

Sonia Lelii

Microsoft Corp. today announced it is acquiring StorSimple, a cloud integrated storage (CIS) provider that uses its appliances to consolidate primary storage, archiving, backup and disaster recovery into the cloud. The terms of the deal were not disclosed.

The cloud appliance company has been at the forefront of designing its technology so companies can converge on-premise primary storage, backup and archiving to the cloud. Its appliances provide full primary storage capabilities, with up to 100 TB of on-premise storage capacity for enterprise applications while pushing data into the cloud.  StorSimple’s software 2.0 version, which does automatic tiering on solid-state drives (SSDs), SAS and the cloud, has a volume prioritization feature for moving data between local and cloud tiers.
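The article doesn’t detail how StorSimple’s tiering engine decides what stays local, but the general shape of a priority-aware tiering policy across SSD, SAS and a cloud tier can be sketched in a few lines. Everything below is an illustrative assumption — the tier names, thresholds and priority scaling are hypothetical, not StorSimple’s implementation.

```python
from datetime import datetime

# Illustrative sketch only: tier names, thresholds and the priority scaling
# are assumptions made for explanation, not StorSimple's actual policy engine.

def pick_tier(last_access: datetime, volume_priority: int, now: datetime) -> str:
    """Pick a tier for a block of data from its access age and its volume's priority.

    volume_priority: 1 = highest. Higher-priority volumes keep data on the
    faster local tiers longer, mirroring the volume prioritization feature
    described above.
    """
    age_hours = (now - last_access).total_seconds() / 3600.0
    ssd_limit_hours = 24.0 / volume_priority         # hot data stays on SSD
    sas_limit_hours = 24.0 * 30 / volume_priority    # warm data stays on SAS
    if age_hours <= ssd_limit_hours:
        return "ssd"
    if age_hours <= sas_limit_hours:
        return "sas"
    return "cloud"                                   # cold data moves to the cloud tier

# A block untouched for three months on a priority-1 volume lands in the cloud tier.
print(pick_tier(datetime(2012, 7, 1), 1, datetime(2012, 10, 1)))  # -> cloud
```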

“This tells me Microsoft is serious about getting into primary storage,” said Arun Taneja, founder, president and consulting analyst for the Taneja Group. “They can use StorSimple as an on-ramp to their (Azure) cloud, but they don’t need StorSimple for that. StorSimple goes way beyond an on-ramp. Amazon built their own gateway for their cloud, so Microsoft must have more in mind for StorSimple.”

Mike Schutz, Microsoft’s general manager of the server and tools business division, would not comment on whether the Santa Clara, Calif.-based StorSimple will be folded into Microsoft. He also declined to discuss any other specific plans for its new acquisition.

“We just signed an agreement. The deal is not done (and) we will share more details after we close,” he said. “(But) StorSimple’s solution and technology is tightly aligned with our strategy of what we call Cloud OS. It’s a hybrid cloud focus. This is a perfect match for our cloud strategy.”

StorSimple’s systems are optimized for Microsoft applications such as Exchange and SharePoint, user files and virtual appliances. It uses Microsoft Volume Shadow Copy Service (VSS) to take snapshots of Microsoft applications and the Windows file system for backups. It also is certified with VMware.

“StorSimple started from the ground up doing Microsoft applications,” said Steve Duplessie, founder and senior analyst of Enterprise Strategy Group (ESG). “It was really specific around Microsoft, Microsoft, Microsoft for applications. This is not about Microsoft trying to be a storage company. It’s trying to be a cloud-enabled company.”

StorSimple also has a number of cloud provider partnerships, including Microsoft Azure, Amazon Web Services, Rackspace, EMC Atmos and Nirvanix. But Microsoft’s Schutz said there are “no plans to change the current partners StorSimple has today.”


October 16, 2012  7:02 AM

Gridstore adds $12.5M to funding grid

Dave Raffo

Startup Gridstore today closed a $12.5 million funding round to build out its sales channel and accelerate development of its scale-out NAS system.

Gridstore uses virtual controllers that install on client devices and spreads capacity among 1 TB or 2 TB nodes. Customers scale by adding virtual controllers and nodes to the grid. Gridstore stripes data across the nodes for fault tolerance, so customers can replace failed nodes by attaching new nodes and the storage pool can survive the loss of multiple nodes.
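Gridstore hasn’t published its striping scheme, but the basic mechanism — split a write into chunks across nodes and add redundancy so the pool survives a node failure — can be sketched simply. The sketch below uses a single XOR parity chunk purely for illustration; surviving the loss of multiple nodes at once, as Gridstore claims, requires a more general erasure code than this.

```python
def stripe_with_parity(data: bytes, data_nodes: int):
    """Split data into one equal-sized chunk per node, plus a single XOR parity chunk.

    Any one missing chunk (one failed node) can be rebuilt by XOR-ing the survivors.
    """
    chunk_len = -(-len(data) // data_nodes)              # ceiling division
    padded = data.ljust(chunk_len * data_nodes, b"\0")   # pad so chunks are equal
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(data_nodes)]
    parity = bytearray(chunk_len)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def rebuild_missing(chunks):
    """Recover a single missing chunk (marked None) by XOR-ing everything that survived."""
    missing = chunks.index(None)
    size = len(next(c for c in chunks if c is not None))
    recovered = bytearray(size)
    for c in chunks:
        if c is not None:
            for i, b in enumerate(c):
                recovered[i] ^= b
    repaired = list(chunks)
    repaired[missing] = bytes(recovered)
    return repaired

# Stripe a write across three data nodes plus one parity node, then lose node 1.
stored = stripe_with_parity(b"virtual machine image data", 3)
stored[1] = None                      # simulate a failed node
print(rebuild_missing(stored)[1])     # -> b'achine im'
```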

Gridstore CEO Kelly Murphy said the vendor’s goal is to “turn storage into a simple set of building blocks that you can add on to, and pay as you go.”

Murphy said Gridstore has about 40 customers, about half of them in education and another quarter of them service providers. He said the startup is ready to build out its channel and improve its visibility. Geoff Barrall, who founded high-end NAS vendor BlueArc and consumer/SMB file storage startup Drobo, joined Gridstore as chairman earlier this year.

Murphy said the funding will also be used to drive further product development, with the addition of solid-state drives (SSDs) among the roadmap items. “That will be an excellent fit in time,” he said. “You can look for some things early next year.”

Gridstore started in the SMB market and has since moved up to small enterprises. Its main competitors are lower-end NAS systems from EMC and NetApp, although Murphy said his company rarely competes with EMC’s Isilon enterprise clustered NAS.

GGV Capital led the Series A funding round with Onset Ventures participating.


October 15, 2012  7:41 AM

Amplidata adds denser, faster object storage nodes

Dave Raffo

Fresh off a CEO change and funding round, object storage vendor Amplidata today added a larger-capacity storage node and an operating system upgrade that supports 16 TB object sizes.

The AmpliStor AS36 is Amplidata’s densest, highest-capacity node. It holds twelve 3 TB drives – up from 10 on the AS30 – for 36 TB per node and can scale to 1.4 PB in a rack. Amplidata also gave the AS36 a performance boost over its predecessors through the addition of the Intel E3 processor and the option to add a 240 GB multi-level cell (MLC) Intel SSD to the storage node. Amplidata previously used SSDs in its controllers but not in the storage nodes.

Paul Speciale, Amplidata’s VP of products, said the SSDs are included for routing small files. He said the Sandy Bridge CPUs result in a 40% speed increase over the AS30 because they can sustain full line-rate performance to each node.

The biggest improvement in the AmpliStor 3.0 software is support for larger files. The previous version supported 500 GB files, but 3.0 is enhanced for customers with big files. Future versions will likely support objects even larger than 16 TB, but Amplidata has to make sure the larger files work with its erasure coding.

“We think our architecture can go higher as far as object sizes, but we have to put it into the test cycle,” Speciale said. “We also have to be able to repair these drives in a reasonable amount of time.”
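The repair-time concern is easy to quantify with rough numbers. The rebuild rates below are generic assumptions for nearline drives, not Amplidata figures; they simply show why larger drives and larger objects stretch the repair window.

```python
# Back-of-the-envelope rebuild time for a failed 3 TB drive.
# The sustained rebuild rates are assumptions, not Amplidata specifications.
drive_tb = 3
for rebuild_mb_per_s in (50, 100, 200):
    seconds = drive_tb * 1_000_000 / rebuild_mb_per_s   # 1 TB ~ 1,000,000 MB
    print(f"{rebuild_mb_per_s:3d} MB/s -> {seconds / 3600:.1f} hours")
# 50 MB/s -> 16.7 hours, 100 MB/s -> 8.3 hours, 200 MB/s -> 4.2 hours
```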

AmpliStor 3.0 also can rebalance storage on nodes automatically after adding capacity. Previous versions allowed customers to add storage on the fly, but did not automatically rebalance.

Last month Amplidata named former Intel executive and Atempo CEO Mike Wall as its new chief, replacing founder Wim De Wispelaere. De Wispelaere remains with the company as chief technology officer.

Amplidata also received $6 million in funding from backup and archiving vendor Quantum at the time, bringing its total funding to $20 million. Quantum has an OEM deal with Amplidata to sell AmpliStor technology under the Quantum StorNext archiving brand.

AmpliStor products are used in cloud storage as well as for archiving. Speciale said he expects the Quantum deal to drive AmpliStor more into the media/entertainment, genomics and government markets where StorNext has the most traction.


October 12, 2012  10:29 AM

Astute takes early lead in VDI benchmark scores

Dave Raffo

Astute Networks this month became the second vendor to publish VDI-IOmark benchmarking numbers, and the vendor promptly proclaimed itself the industry’s lowest-cost storage option per virtual desktop.

VDI-IOmark was developed by the Evaluator Group analyst firm to test storage system performance under virtual desktop infrastructure (VDI) workloads. The benchmark replicates a storage workload running multiple VMware View VDI instances and measures the number of VDI users the system supports.

Astute’s ViSX VM storage appliance supported 400 standard users with a configuration priced at $30,600, which comes to $76.50 per user. The benchmark results showed that a 2U ViSX can provision 140,000 sustained random IOPS that can be shared by all VMs on all hosts over an Ethernet network. Astute ran the benchmark on a ViSX appliance with 2.1 TB of usable capacity, all solid-state drives (SSDs).

Evaluator Group senior partner Russ Fellows said other vendors – fewer than 10 – have run the benchmark but have not made their results public. Hitachi Data Systems published the first set of numbers in January for its BlueArc Mercury 110 NAS array, which came out to $146.19 per user (1,536 users for a $224,546 system). None of the unpublished benchmark results matched Astute’s price per desktop, which is likely the main reason they remain unpublished.
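The price-per-desktop math is simple to reproduce from the published figures:

```python
# Price per VDI desktop = benchmarked system price / supported users,
# using the numbers published for each result above.
results = {
    "Astute ViSX": (30_600, 400),                  # $30,600 config, 400 users
    "HDS BlueArc Mercury 110": (224_546, 1_536),   # $224,546 system, 1,536 users
}
for name, (price, users) in results.items():
    print(f"{name}: ${price / users:.2f} per desktop")
# Astute ViSX: $76.50 per desktop
# HDS BlueArc Mercury 110: $146.19 per desktop
```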

“Some of the big vendors are afraid,” Fellows said. “They don’t want to publish unless they’re the best.”

Fellows said some benchmarked systems are all flash, while others are hybrids using SSDs and hard drives. He said Astute balanced the performance of flash with a lower price than many other all-flash systems.

“Being all-flash helps a lot,” he said. “Some flash in high-end systems is incredibly pricey. Not only does Astute have flash, but it has a competitive price.”

Len Rosenthal, Astute’s senior VP of marketing, said the TCP protocol accelerator chip Astute uses for its iSCSI appliance also plays a big part in performance. “The big difference we have is our Data Pump engine,” he said. “That’s our accelerator for protocol processing, and it allows us to drive up performance in a way that no other storage can. We have dedicated offload technology.”

Rosenthal said he’s confident that no other vendor can beat Astute’s performance at its price. “We wanted to put a stake in the ground,” he said. “If others want to shoot at us, that’s fine. Dollars per VDI is the best thing about our system. Others can throw a $300,000 system at us and beat our system, but [ViSX] is a $30,000 system.”

Astute has set the bar, but the benchmark numbers will become more valuable when we have many more systems to compare. Some vendors and users run the long-standing Iometer benchmark for VDI, but Fellows said that benchmark is useless for VDI workloads.

“It’s a completely fabricated benchmark for VDI,” he said. “Iometer does not produce a workload anything like a VDI user would. It’s not realistic, and it’s misleading to users. There are only a few tools that can generate VDI workloads.”

Other VDI benchmarks include VMware View Planner, Citrix Desktop Transformation Accelerator and Login VSI.


October 10, 2012  9:54 AM

NetApp, Cisco steer reference architectures into ‘express’ lane

Dave Raffo

NetApp and Cisco are expanding their FlexPod reference architecture concept to SMBs with the introduction of ExpressPod.

The best way to think of ExpressPod is as FlexPod’s little brother. FlexPod, which has been on the market for just under two years, uses enterprise storage from NetApp and servers and switching from Cisco. ExpressPod includes NetApp’s FAS2000 SMB storage and Cisco low-end Unified Computing System (UCS) servers and Nexus switches.

The first two ExpressPod architectures come in small and medium sizes. Both include Cisco UCS C220 M3 servers and Cisco Nexus 3048 switches. The small version uses NetApp FAS2220 storage with 32 server cores and the medium includes NetApp FAS2240 arrays and 64 server cores. Like FlexPods, ExpressPods are pre-validated by NetApp and Cisco and include an implementation guide. The reference architectures are sold by NetApp channel partners.

Adam Fore, NetApp director of solutions marketing, said ExpressPod architectures are designed for companies with fewer than 500 employees. ExpressPods are tested with VMware virtualization software, but Fore said the configurations also support Microsoft Hyper-V and other hypervisors.

NetApp and Cisco cite ease of use and lower cost as drivers for implementing ExpressPod, but they won’t give pricing information. They refer all pricing questions to their channel partners.

NetApp is taking a different reference architecture strategy at the lower end than its main rival EMC. While Cisco is the preferred server partner for EMC’s Vspex reference architecture at the high end, EMC will push Lenovo channel partners to build Vspex architectures with Lenovo servers at the SMB level.

NetApp also added clustering capabilities from its latest Data Ontap operating system (8.1.1) to FlexPods, allowing them to scale to 24 nodes. And NetApp and Cisco have added a validated FlexPod design for customers running Oracle RAC databases with VMware vSphere and vCenter.

NetApp and Cisco claim they have 1,300 FlexPod customers – up from 175 a year ago.


October 8, 2012  3:14 PM

HP debuts new Ibrix scale-out NAS system

Sonia Lelii

Hewlett-Packard Co. recently announced a new enterprise scale-out network attached storage (NAS) system – the HP Ibrix X9730 – that scales to 1.68 PB of capacity in a single system and 16 PB in a single namespace.

The storage system, which replaces the HP Ibrix X9720 model, does typical NAS functions but it is also designed for high-volume, long-term active archiving for unstructured data. The array is three times faster on writes and five times faster on reads compared to the 9720, said Patrick Osborne, a director of product management at HP’s storage division.

“This system is bigger and denser,” Osborne said. “You can deploy a 1.7-petabyte cluster in about two hours from the time you power it on. The system is meant for tier three or four archiving. We are not selling it for high-performance, parallel computing. The software in the system is more for longer data storage, but you can use it as an unstructured data repository. It’s a NAS system at the end of the day.”

The HP Ibrix X9730 is 5U and scales up to 16 file server nodes and eight capacity blocks, with each block containing 70 drives. The system now supports 3 TB and 2 TB midline SAS drives, as well as the CIFS, NFS, HTTP, HTTP/S, WebDAV, FTP, FTP/S and NDMP protocols. A two-node 210 TB configuration is priced at $223,589, or about $1 per gigabyte, Osborne said.

In comparison, the 9720 scaled up to 1.2 PB. That product is designated as end-of-life but will be supported for five years.

Like the 9720, the 9730 system is targeted at media, entertainment and content repository uses. It supports archive applications such as Symantec Enterprise Vault and CommVault Simpana. The 9730 comes with HP Ibrix Constant Validation Software that generates checksums to verify that data is not corrupted.

It also comes with a data mobility feature for tiering data in the same namespace based on data access and file type. A WORM data retention capability marks files as retained. HP’s Ibrix operating system software v6.1 streamlines and simplifies deployment of the Ibrix storage system so it can be implemented in a shorter time. The system is based on a pay-as-you-grow architecture, reducing the chance of over-provisioning.

 


October 8, 2012  7:50 AM

Estimating required storage capacity proves tricky

Randy Kerns

Buying storage is an ongoing process that may be periodic in some environments and seemingly continuous in others. There are several primary reasons given for storage purchases:

  • New application deployments that require storing significant amounts of data.
  • Performance or features are required on storage systems to optimize environments such as server virtualization.
  • A technology transition is necessary to replace systems that have reached the end of their economic life.
  • Additional capacity is required to handle the demand to store more information.

A common decision point in purchasing storage for all of these reasons is how much capacity to buy. There may be different types or classes of storage that segment the purchase, but the question of how much remains.  Finding the answer to this is more complicated than it seems.  It starts with evaluating the requirements.

In working with many IT operations on capacity planning, I’ve seen quite a variety of approaches to coming up with the amount of capacity involved in storage purchases.  The different methods range from elaborate capacity planning models to taking what is asked for by application and business owners and multiplying by 10.

One reason a multiplier is used for deciding how much storage to purchase is that capacity demands continue to increase faster than expected, and failure to meet storage demand immediately has negative consequences. Another reason is budgetary issues within companies: there is a feeling that it may be more difficult to get funding for the purchase in the future. This means IT buyers are not sure when they will get another chance to purchase storage because of a potential “freeze.”

The information typically provided about the amount of storage required may prove inaccurate. An example comes from the deployment of a new application that stores information in a database. The systems analyst may have determined that the database will ultimately need 20 TB. The database administrator will request 100 TB, allowing extra capacity for testing and a buffer in case the systems analyst has underestimated the need. The storage admin may double that request for primary capacity to 200 TB and then add another 200 TB for a backup-to-disk target. Now the purchase for a 20 TB primary need has expanded to 10 times that for primary storage plus an equal amount for data protection.
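The compounding is easy to see when the chain of requests from that example is written out (the multipliers are the ones described above, not a general rule):

```python
# How a 20 TB database estimate becomes a 400 TB purchase,
# following the chain of requests described above.
analyst_estimate_tb = 20
dba_request_tb = 100                          # DBA adds test copies and a safety buffer
primary_purchase_tb = 2 * dba_request_tb      # storage admin doubles the request
backup_target_tb = primary_purchase_tb        # equal capacity for the backup-to-disk target
total_tb = primary_purchase_tb + backup_target_tb
print(f"{total_tb} TB purchased against a {analyst_estimate_tb} TB estimate "
      f"({total_tb // analyst_estimate_tb}x expansion)")   # 400 TB, 20x
```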

It is becoming rarer for organizations to upgrade existing storage systems. There are several reasons for this, but the main ones are that organizations don’t want to extend old technology and that their economic models for depreciation make it easier to move to new systems.

In general, storage capacity always gets used – for one reason or another. Managing it effectively requires effort and discipline. Unfortunately, it is not the most efficient process, given the tools required and the time commitment necessary. And no real revolutionary change to improve the situation seems to be in the adoption phase.

The goal for companies is to never run out of required storage capacity.  Mostly, the prediction of how much to acquire is based on hard-won experience.  The best practice is always to purchase storage proactively and not when desperate for capacity.  The other guiding fact in purchasing storage is to keep up with technology changes and make transitions to take advantage of the new developments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

 


October 3, 2012  2:46 PM

IBM delivers new enterprise array powered by Power7

Dave Raffo

IBM today upgraded its flagship storage line with the DS8870 high-end enterprise array. The big additions are the use of 16-core Power7 controllers and support for 1 TB of usable system cache, which IBM claims give the DS8870 three times the performance of the DS8800 it replaces.

Ed Walsh, VP of marketing and strategy for IBM storage, said the enhancements allow the DS8870 to perform analytics faster and represent more than a step upgrade from the DS8800.

“The platform is brand new,” Walsh said. “I think it should be called the DS9000. We can do operational analytics faster. This allows us to be predictive.”

Outside of the processor, however, the DS8870 has a lot in common with the DS8800. They support the same number of drives and capacity (2.3 PB), sixteen 8 Gbps Fibre Channel or FICON ports, and from two to 16 host adapters. Among the other improvements, the DS8870 ships with all Full Disk Encryption (FDE) drives (customers can turn off encryption if they don’t want it) and offers greater VMware vStorage API for Array Integration (VAAI) support than the DS8800.

The performance boost will have to be impressive for IBM to compete with EMC’s VMAX and Hitachi Data Systems Virtual Storage Platform (VSP), considering that IBM has been dropping storage market share to those vendors.

IBM submitted benchmarks to the Storage Performance Council (SPC) to validate its performance claims. The DS8870’s SPC-2 score of 15,423.66 MBps is the highest for that benchmark, which measures aggregate data rate for large file processing, large database query and video on demand workloads.

For maximum I/O request throughput, the IBM DS8870’s SPC-1 score was 451,082.27 IOPS, the top score for an enterprise array without any solid-state drives (SSDs). The DS8870 does support SSDs, but none were used in the benchmark testing.

EMC doesn’t submit benchmarks for VMAX. The HDS VSP had an SPC-2 score of 13,147.87 last month and an SPC-1 score of 269,506.69 last November. However, HDS had better price/performance scores on both benchmarks because the systems it used cost roughly half of IBM’s for both tests.
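Working with ratios alone shows why the price/performance comparison flips even though IBM’s raw scores are higher. The cost ratio below is the “roughly half” figure from this article, not a published price:

```python
# Relative $/IOPS on SPC-1, using only the ratios cited above.
ibm_iops = 451_082.27
hds_iops = 269_506.69
cost_ratio = 0.5                      # HDS system cost ~ half of IBM's; no absolute prices assumed
perf_ratio = hds_iops / ibm_iops      # ~0.60
relative_cost_per_iops = cost_ratio / perf_ratio
print(f"HDS $/IOPS is ~{relative_cost_per_iops:.0%} of IBM's")   # ~84%
```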

Analyst Greg Schulz of StorageIO Group said the performance boost is expected of a new system but said the FDE support can also prove valuable.

“What catches my eye in addition to the usual performance and capacity improvements are the standard support for full disk encryption (FDE) to protect data at rest, and to also reduce the TCO and improve the ROI for storage systems when it comes time to dispose of hundreds and thousands of disks,” he said. “Depending on how it’s deployed, FDE has the potential to shave not just hours, but days and weeks off of the time or cost associated with running secure erase at the end of a product’s useful life. That’s something that organizations should be doing.”

