Storage Soup

A SearchStorage.com blog.


July 19, 2011  2:24 PM

Technical innovation and the Department of Revenue Prevention



Posted by: Randy Kerns
storage vendors

Not all new storage technology comes from startups, although you might get that impression from reading about industry acquisitions.

The reasons most often cited for acquiring a startup company are:
• Technical infusion (technology acquisition)
• Expansion into a new business area (new technology and staff)
• Complementary solutions (filling a hole in the product line)

These reasons would lead to the conclusion that startups bring new technology to customers more effectively than large established companies. Considering that popular technologies such as data deduplication, thin provisioning and iSCSI storage were originally brought to market by startups, there is merit to this line of thinking. But it is not such a simple issue. Large vendors do have brilliant and dedicated people, but developing and bringing a product to market in these companies can be a complicated process. That’s because they create a corporate structure that often makes it difficult to take a new idea or approach and bring it to reality.

Large companies have processes that their people are required to follow, making it difficult to innovate. Any initiative or idea must conform to their interpretation of the company process, and there are organizations and people inside each company that can create enough resistance to hinder realization of the new ideas. I call these people and processes the Department of Revenue Prevention.

If a large company has an entrenched Department of Revenue Prevention, it is easier for people with ideas to take them through the startup route. That route has less resistance, and innovators’ time and efforts are not spent battling the department but actually moving the innovation to market. Unfortunately, the rewards may be limited based on what must be given up to get the funding necessary to take the technology innovation to a product stage. Ultimately a startup may not be successful for a variety of reasons, including:

• A bad board, appointed by investors, that does not understand the market, the technology, or what is required to bring the technology to fruition
• Missing or subpar key people in areas such as strategy, marketing, and sales
• Technology that may not meet customer needs at the right time – arriving either too early or too late

Large companies that understand how to nurture and develop the ideas of their talented people will be more successful than those that succumb to bureaucratic sprawl and a paralyzing Department of Revenue Prevention structure. Even inside great companies, things change over time and bureaucracy spreads. Re-invigorating a company requires periodic review and change to enable innovation. It’s either that, or continue to acquire other people’s ideas.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

July 15, 2011  2:38 PM

IBM unveils next-gen XIV



Posted by: Dave Raffo
EMC VMAXe, enterprise storage system, IBM XIV

While EMC formally launched its VMAXe enterprise storage system to compete with IBM’s XIV (as well as Hewlett-Packard’s 3PAR) this week, IBM was giving XIV an overhaul.

IBM launched what it calls XIV Gen 3 with InfiniBand connectivity between modules, 2.4 GHz quad-core Nehalem CPUs, 2 TB native SAS disks, and 8 Gbps Fibre Channel support. By next year, IBM also expects to offer up to 500 GB of solid-state drive (SSD) capacity per module, for a total of 7.5 TB in a fully configured 15-module system. According to the Inside System Storage blog written by Tony Pearson, senior management consultant for IBM System Storage, XIV will use SSDs to extend its DRAM cache, similar to NetApp’s Performance Accelerator Modules (PAM), which IBM resells as part of its N series.

None of the XIV enhancements are ground-breaking, but IBM claims to get a two- to four-times boost over Gen 2 for workloads such as transaction processing, sequential reads and writes, and file and print services, and applications such as Microsoft Exchange and Hyper-V, Oracle Data Warehouse, and SAS Analytics Reports.

IBM will keep XIV Gen 2 around for at least a year for customers who don’t need the new system’s performance or capacity (Gen 2 uses 1 TB drives).

In case you’re wondering, Gen 2 was the first version of the product IBM launched, in September 2008, after acquiring XIV the previous January. Gen 2 had different disks, controllers and interconnects, plus software enhancements, compared with the Gen 1 product it acquired from XIV.

While IBM characterized XIV as a Web 2.0 system when it first purchased it – the same label EMC used to describe it during the VMAXe launch – Pearson wrote that XIV is a full-blown enterprise system that competes with EMC’s high-end VMAX. “As if I haven’t said this enough times already, the IBM XIV is a Tier-1, high-end, enterprise-class disk storage system, optimized for use with mission critical workloads on Linux, UNIX and Windows operating systems, and is the ideal cost-effective replacement for EMC Symmetrix VMAX, HDS USP-V and VSP, and HP P9000 series disk systems,” Pearson wrote.

He did point out, though, that the DS8000 remains IBM’s platform for mainframe connectivity.

The XIV launch was low-key and played second fiddle to Big Blue’s zEnterprise 114 mainframe server rollout this week, as Enterprise Strategy Group analyst Mark Peters pointed out on his The Business of Storage blog. Peters was generally impressed by the new XIV, though.

“The third generation of XIV is all about adding performance – and plenty of it,” Peters wrote. “Besides more cache, more/faster ports, and a change to SAS drives, there’s also InfiniBand connectivity within the XIV (helping, surprise surprise, with performance) and ‘spare’ CPU and DRAM slots for ‘future software enhancements’ … IBM is keen to point out that the SSD is transparent caching, with no tiers per se to manage. Of course, it would be, since XIV has always proclaimed there’s no need to tier. But, pragmatically, as a user I’d only worry if it economically makes the system better and still does it without me needing to manage things. Assuming so, then I’ll give it a thumbs up and leave the semantic debate to others.”


July 14, 2011  1:02 PM

Nimble adds storage system, grabs $25M in funding



Posted by: Dave Raffo
CS210, iSCSI, nimble storage, storage system

Nimble Storage today added a smaller model of its combination primary storage/backup platform and $25 million in fresh funding.

Nimble launched the CS210, a year after it came out of stealth with CS220 and CS240 systems that combine iSCSI, integrated inline compression and replication to optimize and protect data, and flash to accelerate performance. The startup also said Artis Capital Management has led its fourth funding round, bringing its total funding to $58 million.

The CS210 is an entry-level version of the Nimble platform, with 8 TB of usable capacity for $38,000. By comparison, the CS220 has 16 TB of usable capacity for about $58,000 and the CS240 holds 32 TB for about $88,000.

Besides capacity, the difference with the CS210 is that it doesn’t support 10-Gigabit Ethernet (10 GbE) out of the gate. The CS210 comes with four GbE ports, while the CS220 and CS240 have either six GbE or two 10 GbE ports. All the systems use 1 TB or 2 TB 7,200 rpm SATA drives with up to 1 TB of multi-level cell (MLC) flash in 100 GB drives. The systems cache copies of hot data blocks in flash and write the data to the SATA disks.
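
For a rough sense of how the three models compare on price, here is a back-of-the-envelope cost-per-usable-terabyte calculation using the approximate list prices above (a sketch only; it ignores compression, which changes effective capacity):

```python
# Rough cost per usable TB for the Nimble CS2xx models, based on the
# approximate list prices and usable capacities cited in this post.
models = {
    "CS210": {"usable_tb": 8,  "price_usd": 38_000},
    "CS220": {"usable_tb": 16, "price_usd": 58_000},
    "CS240": {"usable_tb": 32, "price_usd": 88_000},
}

for name, spec in models.items():
    per_tb = spec["price_usd"] / spec["usable_tb"]
    print(f"{name}: ${per_tb:,.0f} per usable TB")

# CS210: $4,750 per usable TB
# CS220: $3,625 per usable TB
# CS240: $2,750 per usable TB
```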

Nimble marketing VP Dan Leary said the vendor has signed up more than 100 customers in three full quarters of shipping systems. He said most customers use the systems for primary applications such as VMware, Microsoft Exchange and SQL Server, while larger companies use them for test/dev and other specific applications.

Leary said he expects the CS210 to appeal to organizations “a little bigger than the classic SMB. We see it fitting at the low end of the mid-market for companies that want a primary system or a remote/branch office of a company that might have a CS220 or CS240 already. It’s also a good fit for customers replicating to a DR facility who have 90 days of snapshots at their main site but can get away with only 30 days in the DR site.”

Nimble’s latest funding closely follows the $16 million round it announced last December. Leary said the startup hasn’t burned through that previous funding haul, but benefitted from an attractive valuation from new investor Artis Capital Management. Artis was a major shareholder in Data Domain before EMC acquired the deduplication backup specialist.

Leary said Nimble will use the funding to expand sales to Europe and Asia. The company has 80 employees today and no sales team outside of North America.

Nimble switched CEOs in March, hiring former NetApp executive and Omneon CEO Suresh Vasudevan to replace founder Varun Mehta, who remains with Nimble as VP of Engineering and sits on the board.


July 13, 2011  1:19 PM

VMware gets deeper into storage with vSphere 5



Posted by: Dave Raffo
storage arrays, vaai, VMware, vSphere 5

Storage played a big part in VMware’s vSphere 5 launch Tuesday, as the vendor introduced a new software product called vSphere Storage Appliance and made enhancements in the areas of storage management and provisioning, replication and disaster recovery in virtual environments.

“Storage plays a central part in what we’re doing [with vSphere 5],” VMware senior product marketing manager Mike Adams said. “A lot of it has to do with advancing the cloud, but we’re also trying to help people become more efficient with storage.”

vSphere Storage Appliance is aimed at SMBs, and lets them turn internal server disks into the shared storage required to reap the benefits of vSphere. Customers load the software onto a server and can point it at one or two additional ESXi targets to create a storage pool – similar to products such as Hewlett-Packard’s Virtual Storage Appliance (VSA). The appliance will cost $5,995 as a standalone product and $7,995 as part of a bundle with vSphere 5.

The first version is limited to three servers. “This is for SMB customers who can’t afford or don’t have the know-how to set up a SAN,” Adams said. “They can use vMotion for live migration and VMware HA for failover of virtual machines. They both require shared storage.”

Other new storage features from the vSphere rollout included:

Storage Distributed Resource Scheduler (DRS). This extends the DRS feature from the compute side to storage, helping customers quickly provision virtual machines to storage pools. DRS takes advantage of new vStorage APIs for Storage Awareness (VASA) to place data and load balance based on I/O and available capacity.
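
VMware hasn’t published the placement algorithm itself, but the underlying idea – ranking candidate datastores on both observed I/O latency and free capacity – can be sketched roughly as follows. The field names and weighting are illustrative assumptions, not VMware’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    latency_ms: float   # observed average I/O latency
    free_gb: float      # available capacity
    total_gb: float

def place_vm(datastores, vm_size_gb, io_weight=0.5):
    """Pick a datastore by balancing I/O load against free space.

    Illustrative sketch only -- not VMware's actual Storage DRS logic.
    """
    candidates = [d for d in datastores if d.free_gb >= vm_size_gb]
    if not candidates:
        raise RuntimeError("no datastore has enough free capacity")
    worst_latency = max(d.latency_ms for d in candidates)

    def score(d):
        # More free headroom raises the score; higher latency lowers it.
        headroom = (d.free_gb - vm_size_gb) / d.total_gb
        io_penalty = d.latency_ms / worst_latency
        return (1 - io_weight) * headroom - io_weight * io_penalty

    return max(candidates, key=score)

pool = [
    Datastore("tier1-ssd", latency_ms=2.0, free_gb=400, total_gb=2000),
    Datastore("tier2-sas", latency_ms=8.0, free_gb=3000, total_gb=8000),
]
print(place_vm(pool, vm_size_gb=250).name)   # prefers the low-latency datastore
```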

Profile-Driven Storage. This lets users map VMs to storage levels based on service level, availability or performance needs. VMs running applications that need the highest performance can be mapped to tier one storage, with less critical apps mapped to lower tiers. Customers associate tiers with service levels for performance and available capacity.

vSphere Replication. Building replication into Site Recovery Manager (SRM) 5 removes the need for array-based replication, allowing customers to replicate data between different types of storage systems. It also adds automated failback and planned migration between data centers. And while VMware presenters didn’t talk much about it during the public launch, the vendor also rewrote the code for its VMware HA availability product.

VMware vStorage APIs for Array Integration (VAAI) support for thin provisioning and NAS hardware acceleration. vSphere will inform arrays when files are deleted or moved by Storage vMotion so the space can be reclaimed. It also monitors capacity on thin-provisioned LUNs and warns users when they are running out of physical space, to avoid oversubscription with thin provisioning. The new hardware acceleration for NAS includes a full file clone that enables the NAS device to clone virtual disks, similar to VMware’s full copy feature for block arrays. It also has a thick virtual disk feature that lets administrators reserve space for an entire virtual disk; previous versions of vSphere always created a virtual disk as a thin-provisioned disk.
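
The thin-provisioning warning described above boils down to watching how much physical space still backs the promises a thin LUN has made. A minimal sketch of that kind of check, with an assumed 85% warning threshold (the threshold and parameters are illustrative, not VMware’s or any array vendor’s API), might look like this:

```python
def check_thin_lun(provisioned_gb, physically_used_gb, backing_free_gb,
                   warn_ratio=0.85):
    """Warn before a thin-provisioned LUN exhausts its physical backing.

    Illustrative sketch only; real arrays and vSphere surface these
    alarms through their own interfaces.
    """
    backing_total_gb = physically_used_gb + backing_free_gb
    usage = physically_used_gb / backing_total_gb
    if usage >= warn_ratio:
        return (f"WARNING: backing pool {usage:.0%} full; LUN promises "
                f"{provisioned_gb} GB but only {backing_free_gb} GB of "
                f"physical space remains")
    return "OK"

# A 2 TB thin LUN backed by a pool that is 90% consumed triggers a warning.
print(check_thin_lun(provisioned_gb=2000, physically_used_gb=900,
                     backing_free_gb=100))
```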

vSphere 5 also adds NFS support for its Storage I/O Control feature that prioritizes I/O of virtual machines in shared storage to reduce latency.


July 11, 2011  1:05 PM

How CIOs obtain information – observations from the field



Posted by: Randy Kerns
storage arrays, storage management, vaai

When talking to CIOs, IT directors and managers, I’m sometimes surprised by what they know – and don’t know – about industry developments. During an education session I held recently, the IT people told me they had not heard of VAAI, the VMware vStorage APIs for Array Integration. This surprised me, given the performance gains VAAI yields with storage systems that support it.

I explained VAAI and the improvements from using vSphere 4.1 and storage systems with VAAI, and then inquired about why they had not heard about it.

Most said that they did not have time to research information themselves and looked to “trusted advisors” for that type of information. The trusted advisors could be a small set of salesmen or sales engineers, or people from well-known independent firms. If the information hadn’t been pushed to them in sessions such as the one I was conducting, they might not hear about it.

Digging further into their reluctance to research information, there was a general feeling that much of the written information they received had so much hyperbole (note: their actual word was “BS”) that the facts and useful information were obscured.

This means that much of the vendor marketing “amplification” was actually a detriment rather than an effective way of relating the virtues of a product or company. The way IT directors and managers view this information has big implications for both marketing and the delivery of storage technology education.

Obviously this is just a personal observation and not a scientific study (as opposed to a marketing study that is intended to obtain the desired results). But it does indicate that any company that invests in delivering information on storage products and technologies should evaluate the effectiveness of its messaging. It also shows the trusted advisor remains the best means of communication.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 30, 2011  1:03 PM

Scale-out NAS purchasing considerations



Posted by: Randy Kerns
scale-out NAS

Scale-out NAS is becoming popular, with most major vendors offering these types of products. As a reminder, scale-out NAS systems will increase performance and capacity at the same time – although you don’t have to scale the systems in the same ratio. You can add controllers for performance, storage for capacity, or both.

There are several things to look at when considering a scale-out NAS system; a simple way to turn them into a scoring sheet is sketched after the list.

1. Is a single namespace provided across all the nodes (also called controllers or heads) so that a file system can be spread across the nodes but the user does not need to take any special action for accessing a file? There are different ways that a single namespace can be implemented, and some may be better than others. Mounting or sharing a file system on a scale-out NAS system should require no more effort than if it were on a single-node system.

2. Does the management software manage across all nodes as an aggregate but still allow individual node communication to detect problems in the system?

3. Is there load balancing across nodes? Load balancing can be automatic when files are stored, distributing data across the different nodes. Will data automatically be redistributed across nodes (in the background) for capacity or load balancing?

4. Can it scale independently? In other words, can you scale nodes for more performance and the underlying storage for more capacity? This provides the greatest flexibility in usage. If the answer is yes, then how many nodes can the system scale to include? And how much capacity (including storage controllers) can it scale to?

5. Is there a back channel for communication between nodes? This requires another communication path between nodes rather than using the same path clients may be using to access data. Examples of this may be an InfiniBand connection between nodes or a 10-Gigabit Ethernet connection. Usually there would be a pair of back channels for availability.

6. Are there any features that are not included that would normally be part of standard NAS systems? A few to consider are: snapshots, remote replication, NDMP support, NFS and native CIFS support, security controls such as Active Directory, LDAP and file locking for shared access between CIFS and NFS, anti-virus software support and quotas.

7. Does the scale-out NAS support both small and large files? Some of the distributed file systems used for scale-out NAS come from the high-performance computing area where the optimization was around large files. It is important to understand whether the system supports small files and large files equally.
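
One practical way to use these questions is to turn them into a simple weighted scoring sheet when comparing candidate systems. The weights and sample answers below are illustrative assumptions, not an Evaluator Group methodology:

```python
# Hypothetical scoring sheet built from the seven questions above.
# The weights reflect one possible set of priorities, not a recommendation.
criteria = {
    "single_namespace":         3,
    "aggregate_management":     2,
    "automatic_load_balancing": 2,
    "independent_scaling":      3,
    "dedicated_back_channel":   1,
    "standard_nas_features":    2,
    "small_and_large_files":    2,
}

def score(answers):
    """answers maps each criterion name to True/False for one candidate."""
    return sum(weight for name, weight in criteria.items() if answers.get(name))

candidate_a = {name: True for name in criteria}
candidate_a["dedicated_back_channel"] = False   # no back channel between nodes

print(f"Candidate A scores {score(candidate_a)} out of {sum(criteria.values())}")
# Candidate A scores 14 out of 15
```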

This list is a first-level look; more detailed differences will be explained in an upcoming Evaluator Group article.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 29, 2011  1:12 PM

Oracle buys Ellison’s storage company, Pillar



Posted by: Dave Raffo
pillar data, storage systems

Oracle CEO Larry Ellison today answered the question of what he will do with Pillar Data Systems, the company into which he has poured hundreds of millions of dollars of his own money. Oracle said it has agreed to acquire Pillar and will use its storage as Oracle’s main SAN platform.

The Pillar deal is likely to be a big topic Thursday when Oracle president Mark Hurd and vice president of systems John Fowler host an Oracle storage strategy update that will be webcast.

In a letter to Oracle customers, Fowler referred to Pillar as a leading provider of SAN block I/O storage systems and highlighted its quality of service and scalable architecture. Fowler wrote that Pillar has nearly 600 customers running 1,500 systems, and boasted that the utilization rate of Pillar Axiom systems is about twice the industry average.

A presentation about the deal on Oracle’s website said Pillar Axiom will become one of four Oracle storage products dedicated to running Oracle software better. The others are Exadata Storage Servers for databases, ZFS Storage Appliances for NAS and StorageTek’s tape family.

Most of Oracle’s storage platform was acquired in the Sun deal, and Sun resold SAN storage from Hitachi Data Systems and LSI. After Oracle acquired Sun last year, Ellison said his company would concentrate on selling storage developed in-house rather than OEM products. It dropped its partnership with HDS for high-end SAN systems last year. It continued to sell midrange SAN systems from LSI, but during Oracle’s quarterly earnings call last week CFO Safra Catz said sales of the LSI systems dropped in the wake of NetApp acquiring LSI’s Engenio storage business. Pillar Axiom will likely replace NetApp Engenio systems as Oracle’s main SAN platform.

People in the storage industry have wondered about Pillar’s fate ever since Oracle bought Sun. Ellison’s venture firm sank $150 million into Pillar to get it started in 2001 and put in a lot more to keep it running over the past 10 years. Pillar didn’t get products out the door until 2005, and it is unlikely that the company has ever run at a profit. An SEC filing by Oracle regarding the acquisition said Pillar owes Ellison “and his affiliates” about $544 million for loans and interest.

During the Oracle call last week, Ellison said most tech companies on the market now are priced too high to acquire.

“I think we’re able to grow through acquisitions when they’re attractively priced and they make sense,” he said. “They are by and large not attractively priced now and don’t make sense, so we’re not doing them. If these assets are wildly overpriced, we can’t make a good business case for buying them. Instead, we can focus our energies on organic growth.”

Oracle will pay nothing up front for Pillar, but it might have to pay Ellison and Pillar stockholders if Oracle makes a profit from the Pillar products over a three-year period from the date the deal closes.

In a blog post today on Pillar’s website, Pillar CEO Mike Workman said he and Pillar president Nancy Holleran will join Oracle. “Pillar is now a critical component to the Oracle storage strategy,” Workman wrote.


June 28, 2011  1:45 PM

IDC: data’s rapidly increasing, staffing isn’t



Posted by: Dave Raffo
Cloud storage, data growth, data management

IDC today released the results of its annual EMC-sponsored Digital Universe study, which confirms what storage professionals see first-hand every day: data keeps growing unchecked and resources to manage it aren’t growing nearly as fast.

IDC forecasts that 1.8 zettabytes of data will be created and replicated this year – enough to fill 200 billion two-hour high-definition movies, 57.5 billion 32GB Apple iPads or the amount of storage required for 215 million high-resolution MRI scans per person per day.

In other words, a whole lot of data – and it’s doubling every two years, according to IDC’s numbers. Metadata, meanwhile, is growing twice as fast as the digital universe itself.
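
Those comparisons are easy to sanity-check with back-of-the-envelope arithmetic (decimal units assumed here, since IDC doesn’t publish its exact conversion):

```python
ZETTABYTE = 10**21   # decimal bytes
GIGABYTE = 10**9

digital_universe = 1.8 * ZETTABYTE

# Implied size of each item in IDC's comparisons
per_ipad = digital_universe / 57.5e9    # 57.5 billion iPads
per_movie = digital_universe / 200e9    # 200 billion two-hour HD movies

print(f"~{per_ipad / GIGABYTE:.0f} GB per iPad (IDC cites 32 GB models)")
print(f"~{per_movie / GIGABYTE:.0f} GB per two-hour HD movie")
# ~31 GB per iPad (IDC cites 32 GB models)
# ~9 GB per two-hour HD movie
```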

Looking farther out, IDC forecasts that by 2020 IT departments will have 10 times as many virtual and physical servers, 50 times as much information, and 75 times as many files or containers encapsulating that information as they do today.

And there will be 1.5 times the number of IT professionals to manage it all.

As you would expect, EMC global marketing CTO Chuck Hollis hit on the “big data” theme in discussing the results, but also suggested the findings could serve as a wakeup call to change the way people manage data.

“I would use this as evidence to go to senior management and say ‘We need a new game plan here,’” Hollis said. “Simply expanding five percent year-over-year on storage costs, taking the machines they have and tuning them up – that’s not going to keep up. I meet a lot of storage people who think they’re like the people with their fingers in the dikes, the water keeps coming and they’re running out of fingers and toes. Maybe it’s time to think about this problem differently.”

Hollis said “a lot of people are looking at this as an opportunity instead of a problem,” and those people are what EMC refers to as the “big data crowd.” They consist largely of media and entertainment companies and researchers who use data to make money for their employers.

“There are actually two kinds of IT organizations we see often in a big company,” Hollis said. “One is the traditional IT guys who deal with shared services, e-mail, Oracle and things like that. The big data crowd is usually a separate IT structure, usually researchers or business guys who have an idea and they handcraft the environment in such a way that makes the money or provides the value they want. The technology is different, the organization is different, and the thinking is different. At what point does this big data IT start looking like mainstream IT? Certainly not this year, but if this data growth keeps going, in three or four years it will be a lot more complex.”

IDC group vice president for storage Dave Reinsel said data growth is fueled partly by the low cost of disk. But he agrees with Hollis that organizations need to take a different look at how they deal with the data.

“We’ve made it dirt cheap to store,” he said. “If costs were going up like gasoline, people might change their behavior. But storage cost per gig is going down every year, so people have more. But data centers aren’t cheap to run. You have to justify building another data center. We’re getting to the point where we need to enable companies to extract the value out of that information.”

So far, Reinsel said, cloud storage isn’t playing much of a role in storing that information. Today, all cloud computing accounts for less than 2% of IT spending.

“Only 20% of information will be touching the public cloud by 2015,” Reinsel said. “People aren’t just jumping to public clouds. Hybrid clouds are out there and social networks are driving growth to public clouds, but there are still security concerns.”


June 27, 2011  3:00 PM

BlueArc files for IPO, again



Posted by: Dave Raffo
bluearc, clustered NAS, ipo

Following a year of large storage acquisitions, it looks like 2011 might be more IPO-friendly for storage vendors.

Two weeks after solid-state storage vendor Fusion-io went public, clustered NAS provider BlueArc on Friday registered with the Securities and Exchange Commission (SEC) for a public offering. Nexsan already has an IPO filing on the books, and Xiotech CEO Alan Atkinson said the SAN vendor is looking to follow Fusion-io’s lead and go public.

BlueArc has gone this far before. It filed for an IPO in 2007 but never followed through because of poor market conditions.

BlueArc, which benefits from an OEM deal with Hitachi Data Systems, has never had a profitable quarter and has lost a total of $230.3 million since it began shipping its storage systems in 2001. Its annual revenue was $74.2 million in 2008, $65.9 million in 2009 and $85.6 million last year, and it lost $19.6 million, $15.8 million and $9.4 million over those years.

In the three months that ended April 30, 2011, BlueArc had revenue of $24.7 million and lost $4.3 million.

The BlueArc filing said the vendor has more than 750 customers with more than 2,000 of its systems deployed.

SAN vendor HDS sells BlueArc’s SiliconFS file system with its storage arrays to give HDS platforms NAS capability. HDS accounted for 41% of BlueArc’s revenue last year and 45% of its revenue for the quarter that ended April 30. BlueArc’s filing said its contract with HDS must be renewed every year; however, judging from public statements HDS has made, it is happy with the BlueArc relationship.

BlueArc’s filing said it hoped to raise up to $100 million in the IPO. That’s small change compared to some of the storage transactions over the past 12 months. EMC acquired BlueArc competitor Isilon for $2.25 billion last year. Also over the past year, Hewlett-Packard bought 3PAR for $2.35 billion, Dell acquired Compellent for $820 million, and NetApp picked up LSI’s Engenio storage division for $480 million.


June 23, 2011  1:24 PM

Xiotech’s SSD strategy: beat Fusion-io



Posted by: Dave Raffo
fusion-io, hybridISE, SSD, xiotech

The people who run Xiotech are closely watching Fusion-io these days.

That’s because the rollout of its Hybrid ISE solid-state storage system this month has increasingly brought Xiotech into competition with PCIe flash card vendor Fusion-io. Xiotech is also looking to go public eventually, and Fusion-io’s IPO this month raised $237 million.

Xiotech CEO Alan Atkinson said Xiotech will ship every unit of Hybrid ISE it can build this quarter, although he didn’t say how many units were built. “This will be the most successful product launch in Xiotech history, and the first of several products on our roadmap in quick succession that will work together,” he said.

Atkinson said part of the success of Hybrid ISE is due to the awareness of the flash market that Fusion-io created with its products and the attention its IPO created.

Xiotech takes a different approach to SSDs than most storage array vendors. Instead of using SSD as cache or plugging SSDs into traditional arrays, Xiotech puts a set amount of SSD capacity along with hard drives in its storage bricks. Each brick has 20 hard drives and 20 SSDs to provide 14.4 TB of usable capacity, and uses what Xiotech calls Continuous Adaptive Data Placement to move data between hard drives and multi-level cell (MLC) SSDs to optimize I/O performance.
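
Xiotech hasn’t detailed how Continuous Adaptive Data Placement works internally, but the general pattern – promoting frequently accessed blocks to the SSD half of the brick and demoting cold blocks back to spinning disk – can be sketched as a toy model. The counters, thresholds and slot counts here are illustrative assumptions, not Xiotech’s implementation:

```python
from collections import Counter

class HybridBrick:
    """Toy model of moving hot blocks to SSD and cold blocks back to HDD.

    Illustrative only -- not Xiotech's Continuous Adaptive Data Placement.
    """

    def __init__(self, ssd_slots=4, promote_after=3):
        self.ssd = set()          # block IDs currently placed on SSD
        self.hits = Counter()     # access counts per block
        self.ssd_slots = ssd_slots
        self.promote_after = promote_after

    def read(self, block):
        self.hits[block] += 1
        if block not in self.ssd and self.hits[block] >= self.promote_after:
            if len(self.ssd) >= self.ssd_slots:
                # Demote the coldest block currently on SSD to make room.
                coldest = min(self.ssd, key=lambda b: self.hits[b])
                self.ssd.remove(coldest)
            self.ssd.add(block)
        return "SSD" if block in self.ssd else "HDD"

brick = HybridBrick()
for blk in [1, 1, 1, 2, 1, 3]:
    print(blk, brick.read(blk))
# block 1 is served from SSD once it has been read three times
```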

Atkinson said Hybrid ISE is shipping mostly into new markets for Xiotech. The new customer base includes Fortune 500 firms, particularly financial services companies looking to accelerate database performance. “That’s not shocking,” Atkinson said. “That’s where Fusion-io is selling, and that’s where the SSD market seems to be.”

Like Fusion-io’s products, Hybrid ISE appeals more to the people who manage applications than traditional storage admins. Along with Oracle databases, SSDs in storage are a good fit for virtual desktop infrastructures (VDIs).

Atkinson said Xiotech’s advantage is that Hybrid ISE is easier to set up and manage than PCIe cards. “Fusion-io goes to the apps guys and says ‘We can make your stuff look really fast.’ And it’s true,” he said. “But the administration of that is pretty difficult. They have to take a small LUN, open the servers up, put a card in, and roll their own DR solution because there’s no built in replication that looks like disk. And they have 800 gig as a target. That means they have to re-architect things.”

The storage vendor landscape has been re-architected over the past few years as the most successful smaller companies have been gobbled up by the big guys. 3PAR, Compellent, Data Domain, EqualLogic, and Isilon all started around the same time as Xiotech, but Xiotech is still on its own while the others have been absorbed by Hewlett-Packard, Dell and EMC. And most of those deals have been for billions of dollars.

Atkinson, who sold software vendor WysDM to EMC in 2008 before joining Xiotech, said it’s good to be among the few smaller storage system companies left standing.

“For a private company, those types of acquisitions raise your profile,” he said. “It makes it easier for us to look at a public offering, which is the path we’re on. There’s a dearth of companies in that space and storage has demonstrated itself to be hot. There’s a real appetite in the [financial] community for storage companies.”

