Storage Soup

A SearchStorage.com blog.


November 17, 2008  4:43 AM

CA goes SaaS route with DR



Posted by: Dave Raffo
data backup, Data center disaster recovery planning, small business storage, Storage backup, Storage Software as a Service

CA jumped into the software as a service (SaaS) game by launching three offerings at CA World. The SaaS offerings include a disaster recovery/business continuity service called CA Instant Recovery On Demand, which is built on technology acquired when CA bought XOsoft in 2006.

CA will sell the service through resellers and other channel partners. A participating reseller will establish a VPN connection between the customer and CA, and use that to automatically fail over a server that goes down. The service supports Microsoft Exchange, SQL Server and IIS, as well as Oracle applications.
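
For a rough picture of what automated failover means at this level, here is a minimal heartbeat-monitoring sketch in Python. It's purely illustrative: the hostname, port, and thresholds are hypothetical, and the XOsoft replication engine underneath the actual service is considerably more sophisticated.

    import socket
    import time

    PRIMARY = ("exchange01.example.com", 25)  # hypothetical monitored server
    CHECK_INTERVAL = 10                       # seconds between heartbeats
    MAX_MISSES = 3                            # consecutive misses before failover

    def is_alive(host, port, timeout=5):
        """True if a TCP connection to the monitored service succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def promote_standby():
        """Placeholder for redirecting clients to the replica over the VPN."""
        print("primary unreachable: failing over to standby replica")

    misses = 0
    while True:
        if is_alive(*PRIMARY):
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL)

Requiring several consecutive misses before failing over avoids flapping on a momentary network blip, which matters when the heartbeat itself crosses a VPN.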

Instant Recovery On Demand costs around $900 per server for a one-year subscription.

Adam Famularo, CA’s general manager for recovery management and data modeling, expects the service to appeal mostly to SMBs because larger organizations are more likely to use the XOsoft packaged software for high availability and replication. “If an enterprise customer says ‘We love this model, too,’ they can buy it,” he says. “But most enterprises want to buy it as a product.”

Famularo says he sees the service more for common server problems than for large disasters. “It’s not just for hurricane season, but for everyday problems,” he says.

November 12, 2008  6:31 PM

Recommended reading about EMC Atmos



Posted by: Beth Pariseau
Cloud storage

Since I’ve been in the storage industry, I can’t think of another product that generated as much hype and interest for as long as EMC’s “Maui,” now rechristened Atmos for its general release.

Now that there’s actual technical nitty-gritty to get into after a year of talk, the Internet is having a field day. A few follow-up resources with great detail about the product have sprung up, and I want to point them out in case you missed them:

  • VMware senior director of strategic alliances Chad Sakac has an intricately detailed post up about how VMware’s vCloud initiative fits in with Atmos, and the differences between what VMware’s doing in the cloud and what Atmos is trying to accomplish.
  • Another EMCer, Dave Graham, covers the details of the hardware (aka “Hulk”) that’s being paired with Atmos.
  • The Storage Architect has questions.
  • Finally, StorageMojo’s Robin Harris has links to all the relevant technical whitepapers for anyone looking to truly geek out to their heart’s content. Harris also points out the highlights for those who don’t make whitepapers typical bedside reading.

I have been skeptical about cloud computing in the past, and remain that way to a certain extent. While I have no doubt about the ability of storage vendors and IT departments to put together huge, booming, honking service provider infrastructures, I still think those pesky details about how users, especially users with a lot of data, are going to get their data there and back in a reasonable period of time have not been addressed. Some of my colleagues, like SearchServerVirtualization’s Alex Barrett, have seen this movie before, and “[wonder] why hosting providers think that cloud storage will succeed when storage service providers (SSPs) of the late 1990s were such a blatant failure?”

For many, the answer to that question is that today’s technology has advanced since the 1990s. Chris Gladwin, founder of Atmos competitor Cleversafe, told me that he’s had the idea for his cloud offering for years, but the average Ethernet network was nowhere near up to speed. Now it’s more like ‘near.’ And just yesterday I wrote about one startup, Linxter, that’s trying to do something about one piece of the networking equation. It may be that the technology’s finally ready, even if the idea doesn’t look much different.
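
The back-of-the-envelope math shows why the network is the crux. A quick sketch in Python, where the link speeds and the 80% efficiency figure are illustrative assumptions rather than measurements:

    def transfer_days(terabytes, megabits_per_sec, efficiency=0.8):
        """Days to move a dataset at a given line rate and protocol efficiency."""
        bits = terabytes * 8 * 10**12        # decimal TB to bits
        seconds = bits / (megabits_per_sec * 10**6 * efficiency)
        return seconds / 86400

    for mbps in (1.5, 10, 100, 1000):        # T1 up through Gigabit Ethernet
        print("1 TB at %6.1f Mbps: %5.1f days" % (mbps, transfer_days(1, mbps)))

At T1 speeds, one terabyte is a multi-month proposition (about 77 days); at gigabit speeds it's an overnight job. That difference is roughly the gap between the SSP era and now.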

It has also been suggested to me by IDC’s Rick Villars that the storage industry thinks of the cloud as an effort to replace all or part of an enterprise data center, but that cloud service providers actually represent a net new mass of storage consumers. It might be that both skeptics and cloud computing evangelists are exactly right, but they’re talking about different markets.

When it comes to the enterprise, though, I think the cloud will have its uses, not necessarily in deployments but in giving rise to products like Atmos. The enterprise has been looking for a global, autonomic, simple, logical means of moving data based on its business value, not to mention cheap, simple, commodity storage hardware components, for quite a few years now. I’ve written stories before prompted by the wistful look in enterprise storage admins’ eyes when they hear about automation like Compellent’s Data Progression feature for tiered storage. Whether or not the external cloud business takes off like people think it will, it looks like some longstanding enterprise storage wish list items might still be checked off.


November 11, 2008  11:02 AM

Startup looks to become ‘lingua franca’ of the cloud



Posted by: Beth Pariseau
Cloud storage

Most of the time, when concepts like the cloud come along, they’re discussed first from a 30,000-foot, theoretical point of view. As they take shape, though, pragmatic nuances come into play. After looking at a map, you still have to get from point A to point B.

The wider economy is dampening some appetites for innovation, but the cloud is rolling on, and new companies are popping up to solve some of the logistical problems presented by its evolution. For one thing, the frailties of today’s Internet networks have come up a lot in the more pragmatic discussions of the cloud. While companies like EMC and Sun are offering advanced kits to service providers, customers still face limited bandwidth and at times lossy networks when uploading their data to the cloud.

I met with one new company looking to address these issues last week. Called Linxter, it’s looking to sell software to service providers that places an agent at each end of the wire between cloud and customer. The agent on the customer side would be embedded into whatever software the service provider already has customers use. These dual agents then act as a kind of universal adapter for sending data over the network to the cloud data center: they reduce protocol chatter, improve latency, and allow for on-demand and scheduled sending and receiving of data. They also automatically retransmit data if a connection is lost, picking back up where the transmission dropped off, which can be key for sending backup streams to the cloud, for example. The software will also be pre-packaged with various devices so that cloud service providers don’t have to re-engineer software for each new endpoint.
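
Linxter hasn’t published its protocol, but the resume-where-it-dropped-off behavior is easy to picture as checkpointed, chunked transfer. A rough Python sketch of the idea follows; send_chunk and the checkpoint file are my own hypothetical stand-ins, not Linxter’s API:

    import json
    import os

    CHUNK_SIZE = 64 * 1024      # 64 KB per send; arbitrary for illustration

    def send_chunk(offset, data):
        """Stand-in for pushing one chunk to the provider; the real transport
        (and its acknowledgments) would live here."""
        pass

    def upload(path, checkpoint="upload.state"):
        """Send a file in chunks, resuming from the last acknowledged offset."""
        offset = 0
        if os.path.exists(checkpoint):        # a prior attempt was cut off
            with open(checkpoint) as f:
                offset = json.load(f)["offset"]
        with open(path, "rb") as f:
            f.seek(offset)
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                send_chunk(offset, chunk)
                offset += len(chunk)
                # Record progress only after the chunk is sent, so a dropped
                # connection restarts at the last good offset.
                with open(checkpoint, "w") as state:
                    json.dump({"offset": offset}, state)
        os.remove(checkpoint)                 # clean finish; no state left over

For a multi-gigabyte backup stream over a flaky consumer link, restarting at the last good offset instead of at byte zero is the whole ballgame.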

According to founder and CEO Jason Milgram, the product’s third public beta was released Oct. 3. The first commercial release will be available on the company’s website in mid-December. “The communication layer is a very high level skill set, and our company has those skills,” Milgram said. “Our technology takes care of the complexity.”

To date the company has been funded to the tune of $3 million by angel investors and is pursuing a channel/partner sales strategy. Milgram couldn’t name any partners, but said there will be at least 10 listed once the commercial release comes out. Of the roughly 50 public beta participants so far, most are systems integrators and ISVs. There have been at least 300 downloads of the Linxter middleware since May, he said.


November 6, 2008  2:26 PM

Harvard Law School offers clarifications on lost backup tape



Posted by: Beth Pariseau
data backup, data security

The Boston Globe reported this morning that an unencrypted backup tape containing personal information on some 21,000 clients of the school’s legal clinic has been lost. According to the newspaper, the tape was lost by a technician who was transporting it on the subway.

The Globe story also reports:

To prevent a similar occurrence in the future, the law school is encrypting the center’s computer servers and backup tapes for a higher level of protection beyond the password. It has bought a new tape library with a bar-code reader for better inventory control and hired a professional courier service to transport the backup tapes.

School spokesperson Robert London told me this afternoon that the Globe story “gives the impression” that the law school has determined where and how the tape was lost, but that’s not the case. “It’s possible it was lost in transit on the MBTA, but it could have been lost after it reached our campus,” he said. The Globe story does not cite a specific source for that information.

London added that the tape was coming from a remote office that was about to become the last branch of the law school to deploy tape encryption, and said the rest of the school’s facilities already have encryption in place. To lose a backup tape from that particular system was “just bad timing and bad luck,” he said.


November 6, 2008  1:41 PM

What if the cloud is really a brain?



Posted by: Beth Pariseau
Around the water cooler, Cloud storage


The idea that the human race has a global brain or a composite consciousness isn’t a new one. It’s at least as old as the Transcendentalist movements of the 1800s, and the rise of computer technology has long sparked imagination about the possibilities for such universal connection made literal. The frequent recurrence of the idea among varying groups and individuals might even be considered evidence that such a superconsciousness exists. Creepy.

One of the most recent variations of this idea is currently making its way around the Internet, in the form of an essay by Kevin Kelly titled “Evidence of a Global Superorganism.” In it, Kelly draws on those concepts of collective consciousness, and postulates that the Internet/cloud is in itself a distributed, virtual, collective consciousness.

But more importantly, Kelly argues, the particular consciousness “emerging from the cloak of wires, radio waves, and electronic nodes wrapping the surface of our planet,” isn’t actually our own.

This megasupercomputer is the Cloud of all clouds, the largest possible inclusion of communicating chips. It is a vast machine of extraordinary dimensions. It is comprised of quadrillion chips, and consumes 5% of the planet’s electricity. It is not owned by any one corporation or nation (yet), nor is it really governed by humans at all. Several corporations run the larger sub clouds, and one of them, Google, dominates the user interface to the One Machine at the moment.

None of this is controversial. Seen from an abstract level there surely must be a very large collective virtual machine. But that is not what most people think of when they hear the term a “global superorganism.” That phrase suggests the sustained integrity of a living organism, or a defensible and defended boundary, or maybe a sense of self, or even conscious intelligence.

…It starts out forming a plain superorganism, then becomes autonomous, then smart, then conscious. The phases are soft, feathered, and blurred. My hunch is that the One Machine has advanced through levels I and II in the past decades and is presently entering level III.

This idea is familiar, and maybe a little bit frightening if you’ve read a lot of science fiction or seen The Matrix, although a recent online survey found that people are less afraid of intelligent machines than of “how humans might use the technology.”


November 5, 2008  12:41 PM

Self-healing disk array maker Atrato names new CEO



Posted by: Beth Pariseau
Strategic storage vendors

Atrato Inc. made a splash earlier this year with its no-maintenance disk array (which was quickly followed to market by Xiotech’s somewhat similar ISE product) and has been relatively quiet since then. Around the time of the product launch, some industry watchers urged the startup to polish its messaging because there were inconsistencies between information on the company’s website and information provided to the press and analysts.

Now it appears Atrato is moving into a more marketing-intensive phase with the promotion of former executive vice president of sales and marketing Steve Visconti to president and CEO. Company founder and former CEO Dan McCormick “has relinquished a day-to-day operational role in his new position of Chairman of the Board,” according to an Atrato press release.

“One can imagine a range of scenarios – some good, some not – that would lead to this move,” Data Mobility Group analyst Robin Harris said in an email this morning. “My sense is that this is a normal progression for a company moving from an engineering/evangelizing mindset to a marketing/selling mindset. I believe they and Xiotech have a unique value proposition. Now is the time for them to flog it hard.”


November 5, 2008  11:55 AM

HP jumps into storage virtualization with midrange arrays



Posted by: Beth Pariseau
Storage

HP rolled out the StorageWorks SAN Virtualization Services Platform (SVSP) this morning, described in a press release as “a network-based storage platform that pools capacity across HP and non-HP storage hardware. It works together with the HP StorageWorks Modular Smart Array (MSA) and the HP StorageWorks Enterprise Virtual Array (EVA) as well as a number of third-party arrays.”

HP can now offer storage virtualization across its product line; it previously offered it only on the XP systems thanks to an OEM deal with Hitachi. SVSP is based at least partly on LSI’s Storage Virtualization Manager (SVM) software. LSI said at SNW last month that it would be revealing a new partnership for SVM in about a month. An HP spokesperson told me that “we’ve worked with [LSI] on joint development…[this is] not a straight OEM.” HP’s release says SVSP features include online data migration, thin provisioning, and data replication.
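
Of the features HP lists, thin provisioning is the easiest to picture: a volume advertises its full capacity to hosts but consumes physical space only as blocks are actually written. A toy Python illustration of the concept (mine, not HP’s or LSI’s):

    class ThinVolume:
        """Toy model: full virtual capacity up front, physical space on demand."""
        def __init__(self, virtual_blocks, block_size=4096):
            self.virtual_blocks = virtual_blocks   # capacity promised to hosts
            self.block_size = block_size
            self.extents = {}                      # physical blocks, mapped lazily

        def write(self, block, data):
            if not 0 <= block < self.virtual_blocks:
                raise IndexError("write past advertised capacity")
            self.extents[block] = data             # allocate on first touch

        def allocated_bytes(self):
            return len(self.extents) * self.block_size

    vol = ThinVolume(virtual_blocks=2**20)         # presents about 4 GB to the host
    vol.write(0, b"\x00" * 4096)
    print(vol.allocated_bytes())                   # 4096: one block consumed so far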

We’ll have complete details on SVSP on our news page later today.


November 4, 2008  4:45 PM

You know things are bad for Sun when…



Posted by: Beth Pariseau
Storage

…someone goes to the trouble of creating puppet theater to express the depths of their angst about the company’s most recent earnings reports (and open-source business model).

I’ve only been in the storage industry a few years, but I’m pretty sure a CEO puppet talking about open-source ponytails is a first for the IT world.


November 4, 2008  10:25 AM

Overland’s backed up against the wall



Posted by: Dave Raffo
Storage

Overland Storage is running out of time in its attempt to transform itself from a tape vendor to a disk system vendor.

Overland has been losing money for years, and last quarter’s loss of $6.9 million left it with $5.4 million in cash. The vendor is betting its future on the Snap Server NAS business it acquired from Adaptec in June and disk backup products, but needs funding to stay alive. On its earnings conference call, CEO Vern LoForti said he hopes to raise $10 million in funding to keep the company going. But getting $10 million in today’s credit crunch isn’t so easy, and Overland has been trying to raise money for at least three months.

In an email to SearchStorage this week, LoForti indicated funding could be coming soon.

“As we indicated in our conference call, we are in discussion with a number of financing institutions concurrently,” he wrote. “We have various letters of intent in hand, and a number of the institutions are currently processing documents for our signature. As we get closer to closing, we will select the one that is most advantageous to Overland. We will publicly announce when the deal is closed. This is #1 on our priority list.”

It has to be the top priority if Overland is to survive. The 10-Q quarterly statement Overland filed with the SEC last week paints a bleak picture:

Possible funding alternatives we are exploring include bank or asset-based financing, equity or equity-based financing, including convertible debt, and factoring arrangements. … Management projects that our current cash on hand will be sufficient to allow us to continue our operations at current levels only into November 2008. [Emphasis added]. If we are unable to obtain additional funding, we will be forced to extend payment terms to vendors where possible, liquidate certain assets where possible, and/or to suspend or curtail certain of our planned operations. Any of these actions could harm our business, results of operations and future prospects. We anticipate we will need to raise approximately $10.0 million of cash to fund our operations through fiscal 2009 at planned levels.

Since it’s already November, the clock is ticking.


November 3, 2008  12:52 PM

ESX and SATA: So happy together?



Posted by: Tskyers
disk drives, VMware

I’m a big fan of SAS. I’ve professed my undying love and devotion to it (at least until solid-state disk becomes just a little more affordable). So why on earth would I be writing about putting VMware’s ESX Server on SATA disks?

I was poking around on the Internet a few weeks back and came across a deal I couldn’t possibly refuse: a refurbished dual Opteron server with 4 GB of RAM and four hard drive bays (one with a 400 GB drive) with caddies and a decent warranty for $229. No, that’s not a typo!

The downside is that the server is all SATA, and ESX won’t install on some generic SATA controllers attached to motherboards, or even on some add-in SATA cards, without some serious cajoling and at least a little hunting around on VMware’s support forum. These hacks are not officially (or even unofficially) supported. In fact, VMFS is explicitly NOT supported on local SATA disk (scroll to the bottom of page 22). There are exceptions to this, and a SATA FAQ is in the works where you can get more info.

When my server arrived, I installed an Areca SATA RAID card in it, downloaded the beta Areca driver for ESX 3.5 update 1 and went about the install. Areca is my favorite SATA/SAS add-in card manufacturer. Their cards are stable and have proven to be the fastest cards in their class for the money.

Why not use LSI, you may ask, especially considering that their driver is in the ESX kernel? Well, I like speed, and I usually don’t make compromises when it comes to the performance of a storage subsystem unless it is absolutely, positively required due to some other dependency. (As a side note, I have LSI installed in my Vista x64 desktop due to a lack of x64 driver availability for my Areca card at the time of install. See my Windows on SAS blog post for more info.)

The Areca driver install went smoothly, and I currently have Exchange 2007, a Linux email server, a Windows 2003 domain controller, a Windows 2008 domain controller, a Windows XP desktop, and an OpenSolaris installation on the 1.2 TB of local SATA storage. So far, things look strong and stable with the driver in place. Normally, I try to stay away from needing a driver disk at install time to get to storage, as it makes recovery a nightmare if you are forced to use a live CD to repair your system. But this process was so stable that I may need to add exceptions to my prejudices…

I’ve had acceptable performance (according to my subjective non-scientific benchmarking) with the relatively small number of VMs I have deployed on this server, so all in all, the storage subsystem is performing admirably.

This leads me to the question: Is SATA in ESX’s future? I wonder, seeing that Xen natively supports a myriad of storage subsystems and Windows Hyper-V will install on SATA. I can’t see VMware being able to hold the line on SCSI/SAS-only local VMFS for much longer.

Couple this with Storage VMotion, and more questions arise about the storage subsystems used for VMware and how they affect VMware’s already strained relationship with storage vendors. On the other side of that coin, why isn’t Microsoft or Citrix taking any heat for allowing the use of onboard SATA attached to cheap integrated controllers? Have the generic onboard storage controllers (Marvell and Silicon Image come to mind) reached the point where they are deemed capable of handling heavy server I/O loads? I’m sure they’ve advanced, but I’m not convinced they can handle the storage I/O loads that a server with 20 VMs would generate.
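
Putting rough numbers on that doubt, using rule-of-thumb IOPS figures rather than benchmarks:

    VMS = 20
    IOPS_PER_VM = 50        # assumed modest mixed workload per guest
    SATA_7200_IOPS = 80     # common rule of thumb for one 7,200 rpm SATA drive

    demand = VMS * IOPS_PER_VM
    spindles = -(-demand // SATA_7200_IOPS)    # ceiling division
    print("%d IOPS demanded; roughly %d SATA spindles needed" % (demand, spindles))
    # prints: 1000 IOPS demanded; roughly 13 SATA spindles needed

A dozen-plus spindles behind a cheap integrated controller is a lot to ask, which is exactly why I’m not convinced.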

But I have a bigger dilemma than all of this. How would I free the terabytes of storage I’d have trapped in these servers if I used eight 1 TB SATA drives? ESX isn’t exactly the platform of choice for a roll-your-own NAS. While it is technically possible for one to install an NFS server daemon on an ESX host, doing so may not be such a good idea. I’ve seen posts on how to compile and install the NFS module on an ESX host, but I haven’t seen any hard documentation as to the overall performance of the system when serving up VMFS via NFS and host/guest VMs simultaneously. You could also create a guest and turn that guest into an iSCSI or NFS target and point your ESX server to it, but that is also a bit kludgy.

In other words, just because I can run ESX on SATA disks local to the host doesn’t mean I should. I haven’t found a compelling business reason to have local VMFS on SATA in these times of inexpensive NAS. However, the pleasant side effect of this dilemma is that until I come up with a business case for using SATA as primary storage for an ESX server, I have a super cheap host on which I can test tweaks and changes to ESX before rolling them into a QA or production environment. Maybe that’s reason enough to have one or two ESX deployments on SATA.

