Storage Soup


November 3, 2008  12:52 PM

ESX and SATA: So happy together?

Tory Skyers

I’m a big fan of SAS. I’ve professed my undying love and devotion to it (at least until solid-state disk becomes just a little more affordable). So why on earth would I be writing about putting VMware’s ESX Server on SATA disks?

I was poking around on the Internet a few weeks back and came across a deal I couldn’t possibly refuse: a refurbished dual Opteron server with 4 GB of RAM and four hard drive bays (one with a 400 GB drive) with caddies and a decent warranty for $229. No, that’s not a typo!

The downside is that the server is all SATA, and ESX won’t install on some generic SATA controllers attached to motherboards, or even on some add-in SATA cards, without serious cajoling and, at the very least, some hunting around on VMware’s support forum. These hacks are not officially (or even unofficially) supported. In fact, VMFS is explicitly NOT supported on local SATA disk (scroll to the bottom of page 22). There are exceptions to this, and a SATA FAQ is in the works where you can get more info.

When my server arrived, I installed an Areca SATA RAID card in it, downloaded the beta Areca driver for ESX 3.5 update 1 and went about the install. Areca is my favorite SATA/SAS add-in card manufacturer. Their cards are stable and have proven to be the fastest cards in their class for the money.

Why not use LSI, you may ask, especially considering that its driver is in the ESX kernel? Well, I like speed, and I usually don’t make compromises when it comes to the performance of a storage subsystem unless it is absolutely, positively required by some other dependency. (As a side note, I have an LSI card installed in my Vista x64 desktop because no x64 driver was available for my Areca card at the time of install. See my Windows on SAS blog post for more info.)

The Areca driver install went smoothly, and I currently have Exchange 2007, a Linux email server, a Windows 2003 domain controller, a Windows 2008 domain controller, a Windows XP desktop, and an OpenSolaris installation on the 1.2 TB of local SATA storage. So far, things look strong and stable with the driver in place. Normally, I try to stay away from needing a driver disk at install time to get to storage, as it makes recovery a nightmare if you are forced to use a live CD to repair your system. But this process was so stable that I may need to add exceptions to my prejudices…

I’ve had acceptable performance (according to my subjective non-scientific benchmarking) with the relatively small number of VMs I have deployed on this server, so all in all, the storage subsystem is performing admirably.
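For anyone who wants a slightly less subjective read on a datastore, a crude sequential-write test is easy to throw together. Here’s a minimal Python sketch, meant to be run from inside a guest whose virtual disk sits on the storage under test. The file name and sizes are just illustrative, and this is no substitute for a proper benchmarking tool:

```python
# Crude sequential-write throughput check -- a rough sanity test only.
# Run from a guest whose virtual disk lives on the datastore under test.
import os
import time

TEST_FILE = "testfile.bin"   # illustrative; put it on the disk under test
BLOCK = 1024 * 1024          # 1 MB writes
TOTAL = 512 * BLOCK          # 512 MB total -- keep it bigger than cache

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # force the data out of the page cache
elapsed = time.time() - start
os.remove(TEST_FILE)

print("wrote %d MB in %.1f s (%.1f MB/s)"
      % (TOTAL // BLOCK, elapsed, TOTAL / BLOCK / elapsed))
```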

This leads me to the question: Is SATA in ESX’s future? I wonder, seeing that Xen has native support for a myriad of storage subsystems and Windows Hyper-V will install on SATA. I can’t see VMware holding the line on local VMFS for SCSI/SAS only much longer.

Couple this with Storage VMotion and it leads to more questions about the storage subsystems used for VMware, and about how they affect VMware’s already strained relationship with storage vendors. On the other side of that coin, why aren’t Microsoft and Citrix taking any heat for allowing the use of onboard SATA attached to cheap integrated controllers? Have generic onboard storage controllers (Marvell and Silicon Image come to mind) reached the point where they’re deemed capable of handling heavy server I/O loads? I’m sure they’ve advanced, but I’m not convinced they can handle the storage I/O that a server with 20 VMs would generate.

But I have a bigger dilemma than all of this. How would I free the terabytes of storage I’d have trapped in these servers if I used eight 1 TB SATA drives? ESX isn’t exactly the platform of choice for a roll-your-own NAS. While it is technically possible for one to install an NFS server daemon on an ESX host, doing so may not be such a good idea. I’ve seen posts on how to compile and install the NFS module on an ESX host, but I haven’t seen any hard documentation as to the overall performance of the system when serving up VMFS via NFS and host/guest VMs simultaneously. You could also create a guest and turn that guest into an iSCSI or NFS target and point your ESX server to it, but that is also a bit kludgy.

In other words, just because I can run ESX on SATA disks local to the host doesn’t mean I should. I haven’t found a compelling business reason to have local VMFS on SATA in these times of inexpensive NAS. However, the pleasant side effect of this dilemma is that until I come up with a business case for using SATA as primary storage for an ESX server, I have a super cheap host on which I can test tweaks and changes to ESX before rolling them into a QA or production environment. Maybe that’s reason enough to have one or two ESX deployments on SATA.

October 30, 2008  4:31 PM

Rackable to build cloud hardware to order

Beth Pariseau

Rackable Systems added another prong to its storage line today. In recent months the company has revealed plans to divest the RapidScale clustered file system it bought with TeraScale in 2006 and to partner with NetApp; its latest approach is to build customized high-density server/storage systems for clients in the cloud.

The new CloudRack system is available in either half-rack (22U) or full-rack (44U) configurations and is built using 1U “tray” servers designed by Rackable. The servers hold up to eight 3.5-inch SAS or SATA II hard drives for a maximum of 352 TB in a 44U system. The servers are “rack-focused,” which means they use one fan and power supply system attached to the rack rather than each containing their own power and cooling components. This also leaves room for disks to be horizontally mounted so they can be popped out for service. All the wiring is on the front of the box as well. “No screwdrivers needed,” Rackable director of server products Saeed Atashie said.
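The capacity math is straightforward if you assume 1 TB drives, which the 352 TB figure implies but the announcement doesn’t spell out:

```python
# Back-of-the-envelope CloudRack capacity check.
# Assumes 1 TB drives -- the announcement gives only the 352 TB total.
trays_per_rack = 44      # 1U tray servers in a full 44U rack
drives_per_tray = 8      # 3.5-inch SAS/SATA II drives per tray
tb_per_drive = 1         # assumed drive size

print(trays_per_rack * drives_per_tray * tb_per_drive, "TB")  # -> 352 TB
```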

While there are minimum and maximum configurations, each system will be custom-built according to particular customer requirements.

Hardware-wise, this box ties in with IDC’s recommendations to storage vendors about building systems for the cloud as discussed in the Enterprise Disk Storage Consumption Model report I posted about here yesterday. However, unlike other systems that have been marketed for the cloud, CloudRack doesn’t offer its own logical abstraction between the hardware and software elements for centralized management or its own clustered file system.

That said, there’s no shortage of storage software vendors in the market today selling products that convert groups of industry-standard servers into something else, whether a clustered NAS system, a parallel NAS system or a CAS archive. Rackable is willing to work with customers to give CloudRack the “personality” they desire, but officials were cagey when it came to saying which specific applications or vendors will be supported. The question, according to Atashie, is not whether the technology would work, but who would service and support the account, which will be worked out “on a case-by-case basis.”

The product becomes generally available today. When I asked about pricing, guess what the answer was…


October 30, 2008  8:06 AM

Brocade gets discount on Foundry

Dave Raffo

When Foundry Networks delayed its shareholders meeting last week to vote on the proposed Brocade deal, there was speculation that A) Brocade wanted to renegotiate the price, or B) it had problems raising $400 million in high-yield bonds to help finance the deal.

Turns out the likely answer was C) both. It’s now clear Brocade did want to renegotiate, and the two companies said Wednesday night they agreed to reduce the price. The new price – $16.50 per share – comes to a total of $2.6 billion. That just happens to be $400 million less than the original purchase price of $19.25 per share, or $3 billion. Now Brocade doesn’t have to come up with the extra financing.
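A quick sanity check on the numbers (my arithmetic, not the companies’): both totals imply roughly the same share count, and the $2.75-per-share haircut accounts for the $400 million:

```python
# Sanity-checking the Brocade/Foundry deal math.
old_total, old_price = 3.0e9, 19.25
new_total, new_price = 2.6e9, 16.50

print("implied shares at old price: %.0fM" % (old_total / old_price / 1e6))  # ~156M
print("implied shares at new price: %.0fM" % (new_total / new_price / 1e6))  # ~158M
print("savings: $%.0fM" % ((old_total - new_total) / 1e6))                   # $400M
```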

Foundry shareholders are now scheduled to vote on the deal Nov. 7.


October 29, 2008  2:41 PM

NetApp’s Dave Hitz says Sun ‘dragging its feet’ in patent lawsuit

Beth Pariseau

About three weeks ago, Sun general counsel Mike Dillon posted about the results of some pre-trial maneuvering between his company and NetApp over the patent-infringement suit brought by NetApp against Sun over ZFS. Dillon was jubilant over the Patent and Trademark Office rejecting three of NetApp’s claims, and the trial court agreeing to pull one of those claims off the table for consideration in the suit. Not a decisive victory for Sun, but not necessarily good news for NetApp, which is seeking damages and an injunction against Sun’s distribution of ZFS and other products that it claims infringe on its patents.

NetApp co-founder Dave Hitz put up a brief response to Dillon’s comments this past Sunday on his blog, in which he attempts to introduce some nuance into the PTO aspect of this issue. “The Patent Office has issued a preliminary rejection of claims in 3 of our patents (out of 16),” Hitz writes. “Such a ruling is not unusual for patents being tried for the first time, and there are two ways to resolve the issue.” One of them is to wait for the PTO to make a ruling on each case, which Hitz calls “the slow way.”

The fast way would be to just proceed with the trial, which Hitz pushes for in his post. “Dillon mentioned issues with three patents, but NetApp currently has 16 WAFL patents that we believe apply to ZFS, with more on the way,” he wrote. “We believe that we have a strong case, and we want to get it resolved.”

He says Sun’s push for the slow way of resolving the dispute indicates the weakness of its position in the case: “To me, the best indicator of strength is to look at which party wants to get on with the case (the one with a strong position), and which party consistently drags its feet and tries to delay (the one with the weak position).”

Hitz’s post doesn’t offer much in the way of new raw information on the proceedings.


October 29, 2008  12:26 PM

IDC: Unstructured data will become the primary task for storage

Beth Pariseau

According to a new IDC Enterprise Disk Storage Consumption Model report released this week, transaction-intensive applications are giving way as the main segment of enterprise data. Taking their place: an expanded range of apps, along with a tendency to create more copies of data and records for business analytics, including data mining and e-discovery.

The report estimates that unstructured data in traditional data centers will eclipse the growth of the transaction-based data that until recently has made up the bulk of enterprise data processing. While transactional data is still projected to grow at a compound annual growth rate (CAGR) of 21.8%, it’s far outpaced by the 61.7% CAGR predicted for unstructured data in traditional data centers.

“In the very near future, the management and organization of file-based information will become the primary task for many storage administrators in corporate datacenters,” the report reads. “And this shift will have a significant impact on how companies assess storage solutions in terms of systems’ performance, operational efficiency, and file services intelligence.”

The IDC report also builds on research first highlighted in an IDC blog last week concerning the cloud. According to the report, the sharpest growth in storage capacity will come from new organizations described as “content depots.” IDC estimates storage consumption from these organizations will grow at a 91.8% CAGR through 2012. Examples of content depots include the usual cloud suspects: Google, Amazon, Flickr, and YouTube.
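To put those growth rates in perspective, here’s what each compounds to over a four-year 2008-to-2012 window (the exact timeframe is my assumption about the report):

```python
# What IDC's growth rates compound to over four years (2008-2012, assumed).
def multiple(cagr, years=4):
    return (1 + cagr) ** years

print("transactional data: %.1fx" % multiple(0.218))  # ~2.2x
print("unstructured data:  %.1fx" % multiple(0.617))  # ~6.8x
print("content depots:     %.1fx" % multiple(0.918))  # ~13.5x
```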

These content depots have different IT requirements and infrastructures than traditional enterprise data centers. We’re seeing examples of these new infrastructures pop up in the market, including systems with logical abstraction between the hardware and software elements; the use of commodity servers as a hardware basis for storage platforms; and the use of clustered file systems.

Some in the industry have compared this “serverization” of storage to the transition between proprietary workstations and PCs in the 1980s. But IDC analyst Rick Villars says this isn’t a zero-sum game. “This isn’t going to replace traditional IT,” he said. “Ninety-five percent of what people are developing and building in the storage industry today is irrelevant to what the cloud is building. You could take that as a negative, but it also translates into opportunity. These are new market spaces and new storage consumers that weren’t around five years ago.”

There’s been a lot of discussion lately about the role the cloud will play as the global economy softens. There is a difference of opinion between those who see a capital-strapped storage market as an even more conservative and risk-averse one and those who argue the opportunity to avoid capital expenditures will nudge traditional IT applications into the cloud. Still others point out the hurdles to cloud computing that remain, including network scalability and bandwidth constraints.

For example, when it comes to storage applications such as archiving, analyst reports from Forrester Research this year cited latency in accessing off-site archived messages and searching them for e-discovery as major barriers to adoption for archiving software-as-a-service (SaaS) offerings.

Cloud computing “definitely exposes weaknesses in networking,” Villars said, but “the closest point to the end user is the cloud, if you want to distribute content to end users spread around the world.”

Other challenges include the growing pains major cloud infrastructures such as Amazon’s S3 have experienced over the last 18 months, and the potential risk of putting more enterprise data eggs in one service provider’s cloud data center basket. Villars points out, “I doubt Amazon has had more problems than a typical large enterprise, and they offer backup with geographic distribution for free.”

However, geographic distribution brings its own challenges, such as varying regulations among different countries. “There are regulatory problems with Europe,” Villars said. “Laws there say that if you have data on a European customer, you can’t move it out of Europe. If you want your cloud provider to spread copies between the U.S., Asia and Europe for global redundancy, that becomes an issue.”


October 28, 2008  4:01 PM

STORServer gives C-suite a forklift upgrade

Beth Pariseau

Data protection appliance vendor STORServer has a new management team, replacing president and CEO John Pearring with a president and a separate CEO, both promoted from inside the company.

Chief operating officer Laura Buckley takes over as president and Bob Antoniazzi moves up from VP of business development to CEO. Buckley will continue as COO for the Tivoli Storage Manager (TSM)-based backup appliance maker while working with the company’s directors. Antoniazzi will be responsible for exploring new market opportunities and business directions for the company, according to a STORServer press release that said Pearring left for personal reasons.

Antoniazzi told me today that the most likely way for the company to expand its products is to flesh out a line of virtual server appliances. STORServer added a virtual instance of its TSM-based backup software inside its hardware appliance a few months ago, but has yet to make it available without hardware. “Right now, we’re being cautious,” he said. “We’re shipping a physical appliance with a virtual appliance inside, tweaked and optimized according to customer requirements for performance and reliability–we can’t just say, ‘Here you go, put this on your own ESX server and good luck.’ I don’t think it’s responsible to do that now.” But that’s the goal eventually.

STORServer also recently announced support for email archiving, but Antoniazzi said, “I don’t see us going and doing more new technologies for new technologies’ sake. Whatever we ship, we support, and we have to make sure our support organization is prepared on anything we add.”

Customers have also inquired about remote replication, data deduplication and support for cloud computing. “These are ideas that we will be investigating, but they’re not going out the door anytime soon,” he said.

Enterprise Strategy Group analyst Lauren Whitehouse said STORServer’s got the right idea by adding virtual servers and email archiving features into the mix, but “I’m not sure they have the opportunity to adopt an all-virtual-appliance strategy. I don’t know what limitations might exist for distributing the OS their applications rely on. But it would be a good step for the lower end of the market. They may also have an opportunity to package up a solution for ROBOs, maybe using IBM’s FastBack.”

She added, “The other thing that is missing for them is just general awareness. They are a great self-sustaining company with a decent channel, but have relatively low visibility in a crowded market. Unfortunately, now it’s a tough economy to make big investments in that way.”


October 28, 2008  8:56 AM

NetApp cancels user conference, releases dedupe VTL

Beth Pariseau

NetApp was supposed to hold its first-ever user conference, called NetApp Accelerate, in February, but yesterday put out a press release saying the conference has been cancelled.

“We had more customer interest in NetApp Accelerate than we anticipated,” said Elisa Steele, senior vice president, Corporate Marketing, in a statement. “But those same customers told us their travel budgets were being cut and it was difficult to commit to attending in today’s climate of economic uncertainty. For those reasons, we decided to cancel this year’s program.”

Wachovia financial analyst Aaron Rakers wonders if NetApp cancelled the conference to trim its own budget.

“While it is clear that economic conditions are resulting in more stringent expense controls at enterprises, we do find this as interesting; we believe possibly a result of NetApp’s own focus on operating expense control,” Rakers wrote in a note to clients.

NetApp said it will release the technical content that had already been prepared for the show between February and May of next year.

Today, NetApp said its long-awaited data deduplication feature for its virtual tape library product has finally arrived. The feature, like NetApp’s primary storage dedupe, will be free for new and existing customers. NetApp has taken a contrarian approach to dedupe. It was the first major storage vendor to offer dedupe for primary data — building the capability into its operating system — but the last of the VTL vendors to add dedupe for backup.
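For those newer to the concept, dedupe boils down to hashing chunks of data and storing each unique chunk exactly once. Here’s a toy Python sketch of the idea using fixed-size blocks and SHA-256 hashes. It’s purely illustrative, not NetApp’s implementation; production systems differ substantially:

```python
# Toy block-level deduplication: store each unique block once and keep
# an ordered list of hashes to reconstruct the original stream.
# Illustrative only -- not how NetApp's dedupe actually works.
import hashlib

BLOCK_SIZE = 4096  # 4 KB, matching WAFL's block size

def dedupe(data):
    store = {}    # hash -> block contents (each unique block kept once)
    recipe = []   # ordered hashes needed to rebuild the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only the first copy is stored
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    return b"".join(store[d] for d in recipe)

# A stream full of repeats dedupes well:
data = b"A" * BLOCK_SIZE * 100 + b"B" * BLOCK_SIZE * 100
store, recipe = dedupe(data)
assert rehydrate(store, recipe) == data
print("blocks in: %d, unique stored: %d" % (len(recipe), len(store)))  # 200 -> 2
```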


October 27, 2008  6:19 PM

IBM takes on Mac backup

Beth Pariseau

IBM and partner Effigent have released a co-developed product for backing up Mac desktops and laptops. Called CDP4Mac, it is an Apple OS X version of IBM’s CDP for Files desktop/laptop data backup software. Like the earlier Windows version of CDP for Files, CDP4Mac tracks changes to workstation files and can upload them to a USB device, a centralized server or a designated URL when connected to a network. Effigent added the Mac interface and the ability to recognize the Mac file system structure.
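To give a flavor of what “tracks changes and uploads them” means in the near-CDP world, here’s a toy Python sketch that polls a directory for modified files and copies them to a backup target. It is emphatically not how CDP4Mac works (IBM hasn’t published those internals), the paths are hypothetical, and real CDP products hook the file system rather than poll:

```python
# Bare-bones illustration of near-CDP: detect changed files and copy
# them to a backup target. Toy code only -- not IBM's implementation.
import os
import shutil
import time

WATCH_DIR = "/Users/demo/Documents"   # hypothetical source directory
BACKUP_DIR = "/Volumes/usb/backup"    # hypothetical USB target

def snapshot(root):
    """Map each file path under root to its last-modified time."""
    state = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

seen = snapshot(WATCH_DIR)
while True:
    time.sleep(5)                      # real CDP hooks the filesystem instead
    current = snapshot(WATCH_DIR)
    for path, mtime in current.items():
        if seen.get(path) != mtime:    # new or modified file
            dest = os.path.join(BACKUP_DIR, os.path.relpath(path, WATCH_DIR))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)   # copy contents plus metadata
    seen = current
```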

Apple has its own near-CDP backup product for OS X, called Time Machine, but an IBM spokesperson said Effigent and IBM had Apple’s support, including testing assistance, because CDP4Mac can also be used to back up Windows data when a Mac runs both operating systems, without the need for separate clients. CDP4Mac can also do single instancing across files from both OSes.

This puts IBM into fresh competition with EMC, which offers both Retrospect and versions of its Mozy backup SaaS that support Mac, as well as Atempo’s LiveBackup CDP product. “There aren’t many solutions out there that support both Mac and PC,” said Enterprise Strategy Group analyst Lauren Whitehouse. “Apple is very tuned to the Apple user.”

She added, “In the corporate environment, users first look to a storage vendor or a familiar partner for backup, rather than Apple, even if they’re running Macs.”


October 24, 2008  3:52 PM

Foundry delays vote on Brocade deal

Dave Raffo

Foundry Networks today abruptly postponed its shareholders vote on its pending acquisition by Brocade, raising questions about whether the $3 billion deal will go through.

Brocade said on July 21 it would buy Ethernet switch vendor Foundry to expand its data center presence. Foundry shareholders were scheduled to vote on the deal today, but the company issued a press release saying the meeting was pushed back to next Wednesday because of “recent developments related to the transaction.”

Foundry did not say what those developments were, and Brocade spokesman John Noh said he could not comment. In a note to his clients, financial analyst Aaron Rakers of Wachovia Capital Markets wrote that Foundry investors are worried that Brocade either hasn’t been able to raise the $400 million in funding to go with the $1.1 billion loan it secured two weeks ago, or is trying to renegotiate terms of the deal.

“It is very hard for us to judge the outcome at this point, but we do believe Brocade has been very committed to the transaction and we believe investors could have meaningful questions on Brocade’s long-term growth story without this acquisition,” Rakers wrote.


October 24, 2008  1:24 PM

Dell prepares for converged networks, FCoE and iSCSI

Beth Pariseau

During a conference call with storage reporters today to discuss the future for data center networking, Dell senior storage manager Eric Endebrock pointed to the convergence of Ethernet and Fibre Channel as inevitable. “Change is afoot,” he said. “FCoE is a more straightforward management infrastructure–the next generation of intercommunication for Fibre Channel.”

No surprise there. Practically every FC storage vendor is saying that. But where it gets tricky with Dell is that it dropped $1.4 billion on iSCSI SAN vendor EqualLogic less than a year ago. And where EqualLogic’s PS Series iSCSI SAN arrays fit into the converged picture isn’t clear yet.

“Protocols will not necessarily be the top factor in choosing the next storage system for customers,” Endebrock said. “We get caught up in the latest cool technology trend on [the vendor and press] side, but customers don’t necessarily care about that.” He added that lossless Ethernet “will float all storage boats” and that “customers see a place for all protocols.”

Also, “linking EqualLogic to iSCSI is probably not the best way to think about it–we also provide a scaling architecture and solve higher customer needs–it’s far more than just a protocol discussion.”

So far, Dell spokespeople aren’t willing to go into further detail about what its exact plan is for EqualLogic. “We continue to investigate our options and will support 10 Gigabit Ethernet as well as Data Center Ethernet with EqualLogic. We’re going to watch our customers’ needs and what the customers want,” Endebrock said.

A presentation at Storage Networking World titled “Yes, Fibre Channel and iSCSI Can Coexist,” by Dell director of global storage and network marketing Praveen Asthana, offered some clues about how Dell sees it all fitting together. “Mixed is in,” Asthana said. But he identified Ethernet as the glue, whether it’s providing the base layer of the unified network or a simple management and monitoring interface for all endpoints on an IP network.

Traditional Fibre Channel offers better performance than traditional iSCSI not only for business applications but also for streaming applications and high-performance computing (HPC) workloads, Asthana pointed out. But he also projected that scale-out iSCSI, especially over 10 GbE, will surpass the performance of both older protocols.

FCoE remains a topic in the eye of the beholder. An attempt by FC vendors to stay relevant against 10 GbE? Or Ethernet taking over the data center? It depends on who you talk to.

Bottom line: Dell will support FC as long as it supports Clariion. Endebrock was mostly mum when it came to the relationship with EMC, as addressed by EMC CEO Joe Tucci in the company’s third-quarter earnings call on Wednesday. “Joe actually laid out that we have a great relationship and we’re actively working together on how to go to market on the best way possible, working on fitting our product lines together. We’re going back to basics and at the ground level refocusing on where we’ve seen success in the past.”

