Storage Soup


November 5, 2008  11:55 AM

HP jumps into storage virtualization with midrange arrays

Beth Pariseau

HP rolled out the StorageWorks SAN Virtualization Services Platform (SVSP) this morning, described in a press release as “a network-based storage platform that pools capacity across HP and non-HP storage hardware. It works together with the HP StorageWorks Modular Smart Array (MSA) and the HP StorageWorks Enterprise Virtual Array (EVA) as well as a number of third-party arrays.”

HP can now offer storage virtualization across its product line; it previously offered it only on the XP systems, thanks to an OEM deal with Hitachi. SVSP is based at least partly on LSI’s Storage Virtualization Manager (SVM) software. LSI said at SNW last month that it would be revealing a new partnership for SVM in about a month. An HP spokesperson told me that “we’ve worked with [LSI] on joint development…[this is] not a straight OEM.” HP’s release says SVSP features include online data migration, thin provisioning, and data replication.
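
For readers new to network-based storage virtualization, here is a minimal Python sketch of the general technique: a thin-provisioned virtual volume draws extents from a pool that spans heterogeneous backend arrays. The array names, extent size, and first-fit allocation policy are my illustrative assumptions, not details of how SVSP itself works.

```python
# Minimal sketch of network-based storage virtualization: a virtual
# volume maps its extents onto capacity pooled from heterogeneous
# backend arrays. Names, sizes, and policy are hypothetical.

EXTENT_MB = 1024  # allocation unit; real platforms vary

class BackendArray:
    def __init__(self, name, capacity_mb):
        self.name = name
        self.free_extents = capacity_mb // EXTENT_MB

    def allocate(self):
        if self.free_extents == 0:
            return None
        self.free_extents -= 1
        return self.name

class VirtualVolume:
    """Thin-provisioned: extents are claimed on first write, not up front."""
    def __init__(self, size_mb, pool):
        self.n_extents = size_mb // EXTENT_MB
        self.pool = pool      # list of BackendArray
        self.mapping = {}     # extent index -> backing array name

    def write(self, extent_idx):
        if extent_idx in self.mapping:          # already backed
            return self.mapping[extent_idx]
        for array in self.pool:                 # first-fit across the pool
            backing = array.allocate()
            if backing is not None:
                self.mapping[extent_idx] = backing
                return backing
        raise RuntimeError("pool exhausted")

pool = [BackendArray("EVA", 10 * 1024), BackendArray("third-party", 20 * 1024)]
vol = VirtualVolume(size_mb=16 * 1024, pool=pool)
print(vol.write(0))  # "EVA" -- capacity comes from whichever array has space
```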

We’ll have complete details on SVSP on our news page later today.

November 4, 2008  4:45 PM

You know things are bad for Sun when…

Beth Pariseau

…someone goes to the trouble of creating puppet theater to express the depths of their angst about the company’s most recent earnings reports (and open-source business model).

I’ve only been in the storage industry a few years, but I’m pretty sure a CEO puppet talking about open-source ponytails is a first for the IT world.


November 4, 2008  10:25 AM

Overland’s backed up against the wall

Dave Raffo

 Overland Storage is running out of time in its attempt to transform itself from a tape vendor to a disk system vendor. 

Overland has been losing money for years, and last quarter’s loss of $6.9 million left it with $5.4 million in cash. The vendor is betting its future on the Snap Server NAS business it acquired from Adaptec in June and disk backup products, but needs funding to stay alive. On its earnings conference call, CEO Vern LoForti said he hopes to raise $10 million in funding to keep the company going. But getting $10 million in today’s credit crunch isn’t so easy, and Overland has been trying to raise money for at least three months.

In an email to SearchStorage this week, LoForti indicated funding could be coming soon.

“As we indicated in our conference call, we are in discussion with a number of financing institutions concurrently,” he wrote. “We have various letters of intent in hand, and a number of the institutions are currently processing documents for our signature. As we get closer to closing, we will select the one that is most advantageous to Overland. We will publicly announce when the deal is closed. This is #1 on our priority list.”

It has to be the top priority if Overland is to survive. The 10-Q quarterly statement Overland filed with the SEC last week paints a bleak picture:

Possible funding alternatives we are exploring include bank or asset-based financing, equity or equity-based financing, including convertible debt, and factoring arrangements. … Management projects that our current cash on hand will be sufficient to allow us to continue our operations at current levels only into November 2008. [Emphasis added]. If we are unable to obtain additional funding, we will be forced to extend payment terms to vendors where possible, liquidate certain assets where possible, and/or to suspend or curtail certain of our planned operations. Any of these actions could harm our business, results of operations and future prospects. We anticipate we will need to raise approximately $10.0 million of cash to fund our operations through fiscal 2009 at planned levels.
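
The “into November” projection is easy to sanity-check with back-of-the-envelope math. This sketch assumes, purely for illustration, that cash burn tracks the reported quarterly loss evenly over the quarter; the 10-Q does not model it this way.

```python
# Back-of-the-envelope runway estimate from the reported figures.
# Assumes cash burn tracks the quarterly loss evenly -- a rough
# illustration, not how the 10-Q actually models it.
cash_on_hand = 5.4       # $M at the end of last quarter
quarterly_loss = 6.9     # $M
monthly_burn = quarterly_loss / 3
print(f"~{cash_on_hand / monthly_burn:.1f} months of runway")
# ~2.3 months at that burn rate, which squares with the 10-Q's
# projection of cash lasting only "into November 2008"
```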

Since it’s already November, the clock is ticking.



November 3, 2008  12:52 PM

ESX and SATA: So happy together?

Tory Skyers

I’m a big fan of SAS. I’ve professed my undying love and devotion to it (at least until solid-state disk becomes just a little more affordable). So why on earth would I be writing about putting VMware’s ESX Server on SATA disks?

I was poking around on the Internet a few weeks back and came across a deal I couldn’t possibly refuse: a refurbished dual Opteron server with 4 GB of RAM and four hard drive bays (one with a 400 GB drive) with caddies and a decent warranty for $229. No, that’s not a typo!

The downside is that the server is all SATA, and ESX won’t install on some generic SATA controllers attached to motherboards, or even on some add-in SATA cards, without serious cajoling and at least some hunting around on VMware’s support forum. These hacks are not officially (or even unofficially) supported. In fact, VMFS is explicitly NOT supported on local SATA disk (scroll to the bottom of page 22). There are exceptions to this, and a SATA FAQ is in the works where you can get more info.

When my server arrived, I installed an Areca SATA RAID card in it, downloaded the beta Areca driver for ESX 3.5 update 1 and went about the install. Areca is my favorite SATA/SAS add-in card manufacturer. Their cards are stable and have proven to be the fastest cards in their class for the money.

Why not use LSI, you may ask, especially considering that their driver is in the ESX kernel? Well, I like speed, and I usually don’t make compromises when it comes to the performance of a storage subsystem unless it is absolutely, positively required due to some other dependency. (As a side note, I have LSI installed in my Vista x64 desktop due to a lack of x64 driver availability for my Areca card at the time of install. See my Windows on SAS blog post for more info.)

The Areca driver install went smoothly, and I currently have Exchange 2007, a Linux email server, a Windows 2003 domain controller, a Windows 2008 domain controller, a Windows XP desktop, and an OpenSolaris installation on the 1.2 TB of local SATA storage. So far, things look strong and stable with the driver in place. Normally, I try to stay away from needing a driver disk at install time to get to storage, as it makes recovery a nightmare if you are forced to use a live CD to repair your system. But this process was so stable that I may need to add exceptions to my prejudices…

I’ve had acceptable performance (according to my subjective non-scientific benchmarking) with the relatively small number of VMs I have deployed on this server, so all in all, the storage subsystem is performing admirably.

This leads me to the question: Is SATA in ESX’s future? I wonder, seeing that Xen has native support for a myriad of storage subsystems, and Windows Hyper-V will install on SATA. I can’t see VMware being able to hold the line of local VMFS on SCSI/SAS only for much longer.

Couple this with Storage VMotion and it leads to more questions about the storage subsystems used for VMware and how they affect VMware’s already strained relationship with storage vendors. On the other side of that coin, why aren’t Microsoft and Citrix taking any heat for allowing the use of onboard SATA attached to cheap integrated controllers? Have the generic onboard storage controllers (Marvell and Silicon Image come to mind) reached the point where they are deemed capable of handling heavy server I/O loads? I’m sure they’ve advanced, but I’m not convinced they can handle the storage I/O loads that a server with 20 VMs would generate.
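
For a rough sense of the numbers behind that skepticism, here is a back-of-the-envelope IOPS estimate. The per-VM demand and per-drive figures are generic rules of thumb I’m assuming for illustration, not benchmarks.

```python
# Rough IOPS math behind the skepticism. Per-VM demand and per-drive
# capability are assumed rules of thumb, not measurements.
vms = 20
iops_per_vm = 50             # assumed mixed workload per VM
demand = vms * iops_per_vm   # ~1,000 IOPS aggregate

sata_drive_iops = 80         # typical 7,200 rpm SATA drive, random I/O
drives_needed = -(-demand // sata_drive_iops)  # ceiling division
print(f"~{demand} IOPS needs ~{drives_needed} SATA spindles")
# ~13 spindles -- more than the 4-6 ports a cheap onboard controller
# usually offers, before counting any RAID write penalty.
```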

But I have a bigger dilemma than all of this. How would I free the terabytes of storage I’d have trapped in these servers if I used eight 1 TB SATA drives? ESX isn’t exactly the platform of choice for a roll-your-own NAS. While it is technically possible to install an NFS server daemon on an ESX host, doing so may not be such a good idea. I’ve seen posts on how to compile and install the NFS module on an ESX host, but I haven’t seen any hard documentation on the overall performance of the system when serving up VMFS via NFS and host/guest VMs simultaneously. You could also create a guest and turn that guest into an iSCSI or NFS target and point your ESX server to it, but that is also a bit kludgy.

In other words, just because I can run ESX on SATA disks local to the host doesn’t mean I should. I haven’t found a compelling business reason to have local VMFS on SATA in these times of inexpensive NAS. However, the pleasant side effect of this dilemma is that until I come up with a business case for using SATA as primary storage for an ESX server, I have a super cheap host on which I can test various tweaks and changes to ESX before I roll them into a QA or production environment. Maybe that’s reason enough to have one or two ESX deployments on SATA.


October 30, 2008  4:31 PM

Rackable to build cloud hardware to order

Beth Pariseau

Rackable Systems added another prong to its storage line today, after revealing in recent months its intention to divest the RapidScale clustered file system it bought with TeraScale in 2006 and to partner with NetApp. Rackable’s latest approach is to build customized high-density server/storage systems for clients in the cloud.

The new CloudRack system is available in either half-rack (22U) or full-rack (44U) configurations and is built using 1U “tray” servers designed by Rackable. The servers hold up to eight 3.5-inch SAS or SATA II hard drives for a maximum of 352 TB in a 44U system. The servers are “rack-focused,” which means they use one fan and power supply system attached to the rack rather than each containing their own power and cooling components. This also leaves room for disks to be horizontally mounted so they can be popped out for service. All the wiring is on the front of the box as well. “No screwdrivers needed,” Rackable director of server products Saeed Atashie said.
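
The 352 TB maximum falls straight out of the configuration math, assuming 1 TB drives in every bay (the drive size is my assumption; Rackable’s spec sheet would be the authority):

```python
# CloudRack maximum-capacity math, assuming 1 TB drives in every bay.
trays_per_rack = 44   # 1U tray servers in a full 44U rack
drives_per_tray = 8   # 3.5-inch SAS/SATA II bays per tray
drive_tb = 1          # assumed largest commonly shipping drive in 2008
print(trays_per_rack * drives_per_tray * drive_tb, "TB")  # 352 TB
```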

While there are minimum and maximum configurations, each system will be custom-built to the particular customer’s requirements.

Hardware-wise, this box ties in with IDC’s recommendations to storage vendors about building systems for the cloud as discussed in the Enterprise Disk Storage Consumption Model report I posted about here yesterday. However, unlike other systems that have been marketed for the cloud, CloudRack doesn’t offer its own logical abstraction between the hardware and software elements for centralized management or its own clustered file system.

That said, there’s no shortage of storage software vendors in the market today selling products that convert groups of industry-standard servers into something else, whether a clustered NAS system, a parallel NAS system or a CAS archive. Rackable is willing to work with customers to give CloudRack the “personality” they desire, but officials were cagey when it came to saying which specific applications or vendors will be supported. The question is not whether the technology would work, though, according to Atashie, but who would service and support the account, which would also be worked out “on a case-by-case basis.”

The product becomes generally available today. When I asked about pricing, guess what the answer was…


October 30, 2008  8:06 AM

Brocade gets discount on Foundry

Dave Raffo

 When Foundry Networks delayed its shareholders meeting last week to vote on the proposed Brocade deal, there was speculation that A) Brocade wanted to renegotiate the price, or B) it had problems raising $400 million in high-yield bonds to help finance the deal.

Turns out the likely answer was C) both. It’s now clear Brocade did want to renegotiate, and the two companies said Wednesday night they agreed to reduce the price. The new price – $16.50 per share – comes to a total of $2.6 billion. That just happens to be $400 million less than the original purchase price of $19.25 per share, or $3 billion. Now Brocade doesn’t have to come up with the extra financing.
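
A quick sanity check shows why the numbers line up so neatly: both per-share prices imply roughly the same share count, so the totals differ by almost exactly the $400 million Brocade no longer has to raise.

```python
# Sanity check on the renegotiated deal: both per-share prices imply
# roughly the same share count, so the totals differ by almost
# exactly the $400M in high-yield bonds Brocade no longer needs.
old_total, old_per_share = 3.0e9, 19.25
new_total, new_per_share = 2.6e9, 16.50
print(f"old implies ~{old_total / old_per_share / 1e6:.0f}M shares")  # ~156M
print(f"new implies ~{new_total / new_per_share / 1e6:.0f}M shares")  # ~158M
print(f"difference: ${(old_total - new_total) / 1e9:.1f}B")           # $0.4B
```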

Foundry shareholders are now scheduled to vote on the deal Nov. 7.


October 29, 2008  2:41 PM

NetApp’s Dave Hitz says Sun ‘dragging its feet’ in patent lawsuit

Beth Pariseau

About three weeks ago, Sun general counsel Mike Dillon posted about the results of some pre-trial maneuvering between his company and NetApp over the patent-infringement suit brought by NetApp against Sun over ZFS. Dillon was jubilant over the Patent and Trademark Office rejecting three of NetApp’s claims, and the trial court agreeing to pull one of those claims off the table for consideration in the suit. Not a decisive victory for Sun, but not necessarily good news for NetApp, which is seeking damages and an injunction against Sun’s distribution of ZFS and other products that it claims infringe on its patents.

NetApp co-founder Dave Hitz put up a brief response to Dillon’s comments this past Sunday on his blog, in which he attempts to introduce some nuance into the PTO aspect of this issue. “The Patent Office has issued a preliminary rejection of claims in 3 of our patents (out of 16),” Hitz writes. “Such a ruling is not unusual for patents being tried for the first time, and there are two ways to resolve the issue.” One of them is to wait for the PTO to make a ruling on each case, which Hitz calls “the slow way.”

The fast way would be to just proceed with the trial, which Hitz pushes for in his post. “Dillon mentioned issues with three patents, but NetApp currently has 16 WAFL patents that we believe apply to ZFS, with more on the way,” he wrote. “We believe that we have a strong case, and we want to get it resolved.”

He says Sun’s push for the slow way of resolving the dispute indicates the weakness of its position in the case: “To me, the best indicator of strength is to look at which party wants to get on with the case (the one with a strong position), and which party consistently drags its feet and tries to delay (the one with the weak position).”

Hitz’s post doesn’t offer much in the way of new raw information on the proceedings.


October 29, 2008  12:26 PM

IDC: Unstructured data will become the primary task for storage

Beth Pariseau

According to a new IDC Enterprise Disk Storage Consumption Model report released this week, transaction-intensive applications are giving way as the main segment of enterprise data to an expanded range of apps, along with a tendency to create more copies of data and records for business analytics, including data mining and e-discovery.

The report estimates that unstructured data in traditional data centers will eclipse the growth of transaction-based data that until recently has been the bulk of enterprise data processing. While transactional data is still projected to grow at a compound annual growth rate of 21.8%, it’s far outpaced by a 61.7% CAGR predicted for unstructured data in traditional data centers.
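
To see how quickly a 61.7% CAGR eclipses a 21.8% one, compound both rates from an assumed equal baseline. The equal starting point is purely illustrative; IDC’s actual baseline figures aren’t cited in the report excerpt.

```python
# Why a 61.7% CAGR "eclipses" 21.8%: compound both rates from an
# assumed equal baseline (illustrative only -- IDC's actual baseline
# capacities aren't cited here).
transactional_cagr = 0.218
unstructured_cagr = 0.617
for years in range(6):
    t = (1 + transactional_cagr) ** years
    u = (1 + unstructured_cagr) ** years
    print(f"year {years}: transactional x{t:.1f}, unstructured x{u:.1f}")
# After 5 years: ~x2.7 vs ~x11.1 -- a 4x gap from the growth rates alone.
```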

“In the very near future, the management and organization of file-based information will become the primary task for many storage administrators in corporate datacenters,” the report reads. “And this shift will have a significant impact on how companies assess storage solutions in terms of systems’ performance, operational efficiency, and file services intelligence.”

The IDC report also builds on research first highlighted in an IDC blog last week concerning the cloud. According to the report, the sharpest growth in storage capacity will come from new organizations described as “content depots.” IDC estimates storage consumption from these organizations will grow at a compound annual growth rate of 91.8% through 2012. Examples of content depots include the usual cloud suspects: Google, Amazon, Flickr, and YouTube.

These content depots have different IT requirements and infrastructures than traditional enterprise data centers. We’re seeing examples of these new infrastructures pop up in the market, including systems with logical abstraction between the hardware and software elements; the use of commodity servers as a hardware basis for storage platforms; and the use of clustered file systems.

Some in the industry have compared this “serverization” of storage to the transition from proprietary workstations to PCs in the 1980s. But IDC analyst Rick Villars says this isn’t a zero-sum game. “This isn’t going to replace traditional IT,” he said. “Ninety-five percent of what people are developing and building in the storage industry today is irrelevant to what the cloud is building. You could take that as a negative, but it also translates into opportunity. These are new market spaces and new storage consumers that weren’t around five years ago.”

There’s been a lot of discussion lately about the role the cloud will play as the global economy softens. There is a difference of opinion between those who see a capital-strapped storage market as an even more conservative and risk-averse one and those who argue the opportunity to avoid capital expenditures will nudge traditional IT applications into the cloud. Still others point out the hurdles to cloud computing that remain, including network scalability and bandwidth constraints.

For example, when it comes to storage applications such as archiving, analyst reports from Forrester Research this year cited latency in accessing off-site archived messages and searching them for e-discovery as major barriers to adoption for archiving software-as-a-service (SaaS) offerings.

Cloud computing “definitely exposes weaknesses in networking,” Villars said, but “the closest point to the end user is the cloud, if you want to distribute content to end users spread around the world.”

Other challenges include the growing pains major cloud infrastructures such as Amazon’s S3 have experienced over the last 18 months, and the potential risk of putting more enterprise data eggs in one service provider’s cloud data center basket. Villars points out, “I doubt Amazon has had more problems than a typical large enterprise, and they offer backup with geographic distribution for free.”

However, geographic distribution brings with it its own challenges, such as varying regulations among different countries. “There are regulatory problems with Europe,” Villars said. “Laws there say that if you have data on a European customer, you can’t move it out of Europe. If you want your cloud provider to spread copies between the U.S., Asia and Europe for global redundancy, that becomes an issue.”


October 28, 2008  4:01 PM

STORServer gives C-suite a forklift upgrade

Beth Pariseau

Data protection appliance vendor STORServer has a new management team, replacing president and CEO John Pearring with a president and a separate CEO, both promoted from inside the company.

Chief operating officer Laura Buckley takes over as president and Bob Antoniazzi moves up from VP of business development to CEO. Buckley will continue as COO for the Tivoli Storage Manager (TSM)-based backup appliance maker while working with the company’s directors. Antoniazzi will be responsible for exploring new market opportunities and business directions for the company, according to a STORServer press release that said Pearring left for personal reasons.

Antoniazzi told me today that the most likely way for the company to expand its products is to flesh out a line of virtual server appliances. STORServer added a virtual instance of its TSM-based backup software inside its hardware appliance a few months ago, but has yet to make it available without hardware. “Right now, we’re being cautious,” he said. “We’re shipping a physical appliance with a virtual appliance inside, tweaked and optimized according to customer requirements for performance and reliability–we can’t just say, ‘Here you go, put this on your own ESX server and good luck.’ I don’t think it’s responsible to do that now.” But that’s the goal eventually.

STORServer also recently announced support for email archiving, but Antoniazzi said, “I don’t see us going and doing more new technologies for new technologies’ sake. Whatever we ship, we support, and we have to make sure our support organization is prepared on anything we add.”

Customers have also inquired about remote replication, data deduplication and support for cloud computing. “These are ideas that we will be investigating, but they’re not going out the door anytime soon,” he said.

Enterprise Strategy Group analyst Lauren Whitehouse said STORServer’s got the right idea by adding virtual servers and email archiving features into the mix, but “I’m not sure they have the opportunity to adopt an all-virtual-appliance strategy. I don’t know what limitations might exist for distributing the OS their applications rely on. But it would be a good step for the lower end of the market. They may also have an opportunity to package up a solution for ROBOs, maybe using IBM’s FastBack.”

She added, “The other thing that is missing for them is just general awareness.  They are a great self-sustaining company with a decent channel, but have relatively low visibility in a crowded market.  Unfortunately, now it’s a tough economy to make big investments in that way.”


October 28, 2008  8:56 AM

NetApp cancels user conference, releases dedupe VTL

Beth Pariseau

NetApp was supposed to hold its first-ever user conference, called NetApp Accelerate, in February, but yesterday put out a press release saying the conference has been cancelled.

“We had more customer interest in NetApp Accelerate than we anticipated,” said Elisa Steele, senior vice president, Corporate Marketing, in a statement. “But those same customers told us their travel budgets were being cut and it was difficult to commit to attending in today’s climate of economic uncertainty. For those reasons, we decided to cancel this year’s program.”

Wachovia financial analyst Aaron Rakers wonders if NetApp cancelled the conference to trim its own budget.

“While it is clear that economic conditions are resulting in more stringent expense controls at enterprises, we do find this as interesting; we believe possibly a result of NetApp’s own focus on operating expense control,” Rakers wrote in a note to clients.

NetApp said it will release technical content that had already been prepared for the show between February and May next year.

Today, NetApp said its long-awaited data deduplication feature for its virtual tape library product has finally arrived. The feature, like NetApp’s primary storage dedupe, will be free for new and existing customers. NetApp has taken a contrarian approach to dedupe. It was the first major storage vendor to offer dedupe for primary data — building the capability into its operating system — but the last of the VTL vendors to add dedupe for backup.
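
For readers unfamiliar with how dedupe works under the hood, here is a minimal block-level sketch of the general technique: store each unique block once, keyed by its content hash. This illustrates the concept only; it is not NetApp’s implementation, which the company builds into its operating system.

```python
import hashlib

# Minimal block-level dedupe sketch: store each unique block once,
# keyed by its content hash, and keep an ordered "recipe" of hashes
# to reconstruct the original stream. General technique only.
BLOCK = 4096

def dedupe(data: bytes):
    store = {}    # hash -> block payload, stored once
    recipe = []   # ordered hashes to rebuild the stream
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only first copy is kept
        recipe.append(digest)
    return store, recipe

data = b"A" * BLOCK * 10 + b"B" * BLOCK * 2   # highly redundant sample
store, recipe = dedupe(data)
print(f"{len(recipe)} blocks written, {len(store)} stored")  # 12 -> 2
```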

