UK Data Storage Buzz


November 2, 2011  12:09 PM

SNW Europe: Centripetal/centrifugal forces at work



Posted by: Editor
datacore, dothill, ibm storage, netapp, ontap-v

It’s an axiom of capitalism that the customer should be free to choose what they buy. But it’s not always a right that’s freely given, and businesses prefer to lock you in if they can.

In the consumer IT world Microsoft was forced, for example, to make the use of Internet Explorer a choice rather than an obligation when you bought a PC/operating system. In the business IT world, the last decade or so has seen the liberation of the operating system from the processing hardware, most notably in the case of Unix flavours/RISC chips and their supersession by Windows and Linux on x86 servers.

Spending time talking to storage vendors at SNW Europe, it’s clear there are competing forces at work in storage too. Continued »

October 19, 2011  9:38 AM

VMworld: VMware’s cloud vision has a storage-shaped hole in it



Posted by: Editor
chargeback, cloud storage, private cloud, Vmware, Vmworld

As you arrive at VMware’s VMworld extravaganza in Copenhagen this week, you are left in no doubt about the company’s key message—the cloud—and how VMware will help you achieve it.

So, what does VMware mean by the cloud? Well, it turns out its idea of the cloud has a hole in it when it comes to storage.

Having missed the first day’s keynote due to being above clouds or in airport lounges (ie, I suffered travel difficulties), I asked Michael Adams, VMware senior product marketing man, about the company’s vision.

With regard to applications and their delivery, VMware’s ideas contain all the elements a private cloud requires. With vCloud Director, an organisation can allow its users self-service provision of applications from a catalogue of services and charge them for it. In this it is similar to the selection, delivery and charging of smartphone “apps.”

It’s fair to say that this fits the definition of cloud services but only for access to applications and not storage provision. Adams talked of the numerous ways in which VMware’s vSphere 5 facilitates the management of storage, but he conceded that cloud delivery of storage (ie, self-service, from a catalogue, with chargeback) has no place in VMware’s plans. “It’s more about providing the entire VM with its back-end storage as part of the larger picture,” he said.

While VMware’s cloud-as-app delivery method is a slick one, I’m not convinced the world is ready for this model. For the foreseeable future, there will be demand for storage delivered from private clouds in wholesale fashion to departments that require capacity for their applications.

The issue organisations want to solve right now is how to provide storage capacity to users with a minimum of management fuss and to be able to charge departments for that capacity. VMware’s vision seems to leap way ahead of the game.
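
For the avoidance of doubt, the chargeback part of that is not complicated. A back-of-envelope sketch in Python, assuming a flat (and entirely made-up) internal rate per provisioned gigabyte per month, looks something like this:

    # Illustrative chargeback sketch; the rate and department figures are invented.
    PRICE_PER_GB_MONTH = 0.10  # assumed internal rate per provisioned GB per month

    provisioned_gb = {
        "finance": 2000,
        "engineering": 8500,
        "marketing": 750,
    }

    for dept, gb in provisioned_gb.items():
        print(f"{dept}: {gb} GB provisioned -> {gb * PRICE_PER_GB_MONTH:.2f} per month")

The hard part, as ever, is the plumbing around it: metering what each department has actually been provisioned with, and doing so with a minimum of management fuss.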


October 19, 2011  8:28 AM

VMworld/Neverfail: Leading the life of pilot fish



Posted by: Editor
failover, high availability, Neverfail, vcenter server heartbeat, Vmware, Vmworld

While some vendors must look in many directions to keep up with developments in server virtualisation, others lead an apparently less perilous life. Like pilot fish in symbiosis with the shark, their modus operandi allows them to profit from the life of the larger companion.

One such company is Neverfail, which was showing its wares at VMworld in Copenhagen this week. Originally a disaster recovery services company, it transformed into a failover software business many years ago. Its core product set uses agents to monitor the health of server and storage resources and alerts or fails over whole app groups and servers should the need arise.
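
To illustrate the general idea (and only the idea; this is nothing like Neverfail's actual code), an agent-and-heartbeat failover loop can be sketched in a few lines of Python:

    import time

    MISSED_LIMIT = 3          # consecutive missed heartbeats before we fail over
    HEARTBEAT_INTERVAL = 5    # seconds between health checks

    def heartbeat_ok(server: str) -> bool:
        # Placeholder health check; a real agent would probe services,
        # storage paths and application state on the monitored server.
        return True

    def monitor(primary: str, standby: str) -> None:
        missed = 0
        while True:
            if heartbeat_ok(primary):
                missed = 0
            else:
                missed += 1
                if missed >= MISSED_LIMIT:
                    print(f"Failing over app group from {primary} to {standby}")
                    return
            time.sleep(HEARTBEAT_INTERVAL)

The value in the real products lies in what goes inside that health check, and in failing over whole app groups cleanly, not in the loop itself.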

It clearly works well. In 2009, one of Neverfail’s products was selected to be rebadged by VMware as vCenter Server Heartbeat as the means of providing failover for the critical VMware management tool (usually held, according to best practice, on a physical server).

Therein lies one of Neverfail’s key strengths—it can fail over almost any software from either virtual or physical servers. VMware can only provide failover, using the likes of Site Recovery Manager (SRM), for virtual servers. So, Neverfail has developed a range of tools that complement VMware components, such as SRMXtender, which adds physical machine failover to SRM.

Another key competency is its monitoring capability. It provides this, for example, in its vAppHA tools, which integrate with VMware’s High Availability (adding physical machine failover capability).

Instead of sharks and pilot fish, Neverfail’s Texas office came up with a more American analogy: “VMware builds the pickup trucks; we provide the gun racks and tow bars.”


October 19, 2011  8:02 AM

VMworld/Acronis: The backup game is stretched by server virtualisation



Posted by: Editor
Acronis, backup, Hyper-v, red hat, Vmware, Vmworld

Speaking to SMB/small-enterprise backup vendor Acronis at VMworld in Copenhagen this week, it’s apparent that the backup game is stretched* as a result of the impact of virtual server technology.

For years Acronis had one basic platform—Backup & Recovery—which put backup, data deduplication, disaster recovery and data protection features under one interface. Then it felt rising demand for VMware-specific backup and developed its vmProtect 6 product, released earlier this year at VMworld in the US.

So far, so good. Acronis seems to be a company responding well to its market.

So, how is the game stretched?

Well, there’s the VMware backup scene. This has changed rapidly over the past four or five years. First people backed up virtual machines just like physical machines; then they were able to use VMware Consolidated Backup (VCB) and its rather awkward two-stage process. Since 2009, with the release of VMware’s vStorage APIs for Data Protection, backup products have been able to back up virtual machines pretty seamlessly.
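
To make the contrast concrete, here is a toy sketch of the two flows using plain local files as stand-ins for VM disks. It is purely illustrative; none of it is the real VCB or vStorage API:

    import shutil
    from pathlib import Path

    def two_stage_backup(vm_disk: Path, staging_dir: Path, backup_dir: Path) -> None:
        # VCB-style: first copy the whole "VM disk" to a proxy staging area...
        staged = staging_dir / vm_disk.name
        shutil.copy2(vm_disk, staged)
        # ...then run the conventional backup against the staged copy.
        shutil.copy2(staged, backup_dir / vm_disk.name)
        staged.unlink()  # clean up the intermediate copy

    def direct_backup(vm_disk: Path, backup_dir: Path) -> None:
        # API-style: the backup application reads the source once, with no staging hop.
        shutil.copy2(vm_disk, backup_dir / vm_disk.name)

The two-stage version needs twice the I/O and scratch space on the proxy, which is exactly why the API-based approach feels so much more seamless.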

So, vendors like Acronis have had to keep up with these developments and make sure their products are capable of the latest ways of doing virtual machine backup. But, in doing so, they are getting way ahead of the end-user experience and market. SearchStorage.co.UK research earlier this year showed nearly 25% of users are still at the two-stage VCB process, while 20% still back up virtual machines with traditional backup products and agents.

That’s to be expected. End users are rightly conservative. When something works and/or isn’t too painful and you only implemented it 12 months ago, you’re not going to replace it just because the vendors suddenly came up with a better way of doing things.

At the same time, however, we can’t expect VMware to be the only virtualisation game in town forever. Microsoft’s Hyper-V has the advantage of being cheap or free to buy into, and though it currently lacks the ecosystem around it that VMware has, that may well not last. There are other thoroughbreds in the hypervisor market too, like Red Hat.

So, to use the footballing phrase, the game is stretched for backup vendors in this newly virtualised world. Existing markets need to be satisfied. New products need to be anticipated and developed, and it all presents incredible amounts of opportunity and risk.

(*For those not familiar with football [soccer] terminology the game is often said to be “stretched” in its latter stages. Tiredness takes hold, and the teams can’t hold a neat formation any longer. So, instead of all 20 outfield players bunched within easy passing distance, they are stretched up and down the pitch.)


October 14, 2011  8:48 AM

Overland’s DX1/DX2: Why no mirroring, no clustering?



Posted by: Editor
DX1, DX2, mirror, NAS, Overland Storage, parity, RAID

Overland Storage this week launched a pair of NAS/iSCSI boxes with the aim of bringing enterprise-class storage to the SME. A couple of things puzzle me about that.

They’re the DX1 and DX2, a 1U and 2U pair of products that are expandable to 156 TB and 288 TB, respectively. Base units are priced at $1,699 and $3,999. So, no doubt these are affordable pieces of kit for the smaller business.

OK, so here’s what puzzles me. Continued »


September 29, 2011  10:24 AM

QLogic products hedge Fibre Channel over Ethernet bets



Posted by: Editor
Antony Adshead, Brocade, Cisco, director-class, Ethernet, FCoE, Fibre Channel, QLogic, Storage networking, switch

QLogic has announced three new and/or enhanced products this week: a new HBA/CNA and switch, plus a router.

What’s most notable about these launches is the incorporation of so many storage/networking protocols into single devices; and that reflects the state of flux/inertia of data centre transport protocols. Continued »


September 20, 2011  11:32 AM

Virsto and the exodus of intelligence from storage



Posted by: Editor
boot storm, latency, rotational, VDI, Virsto, virtual storage appliance

Last week I blogged on storage moving away from the array and into the host as a result of the demands of server virtualisation. This week I spoke to a vendor — Virsto — that puts intelligence in the hypervisor to finesse the operation of storage in the array.

Virsto — which recently launched its Virsto for VDI vSphere Edition — attacks the pain point around virtual machine I/O and storage. In other words, its products address the tendency for many virtual machines in a physical server to create lots of random I/O and therefore rotational latency as they all send read and write requests to storage.

What Virsto does is install a virtual storage appliance on each host. This in turn creates a ‘log’ in the array but in front of the primary storage. The log is a bit like a cache-with-intelligence that physically resides in storage media specified by the user and sequentialises random write requests to disk.

The effect of this is that the VDI host gets its write acknowledgement and is happy, while the actual data is filed away later, as a sequential write. The end result of all this is to nullify the ‘boot storm,’ lower latency on VDI writes and free up resources for VDI reads.
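
A minimal sketch of that log idea in Python, assuming nothing about Virsto's actual implementation, looks like this:

    class WriteLog:
        def __init__(self):
            self.log = []        # append-only log, held on fast media in front of primary storage
            self.primary = {}    # stands in for primary storage (block address -> data)

        def write(self, block_addr, data):
            self.log.append((block_addr, data))   # a cheap sequential append...
            return "ack"                          # ...so the host gets its acknowledgement straight away

        def destage(self):
            # Later, flush the log to primary storage in address order,
            # turning the random write stream into a sequential one.
            for block_addr, data in sorted(self.log, key=lambda entry: entry[0]):
                self.primary[block_addr] = data
            self.log.clear()

Reads, boot storms included, no longer have to queue behind a stream of random writes hitting spinning disk.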

Virsto claims a 2x to 3x speed increase and says it can do* for $6 to $8 per gigabyte on a vanilla RAID subsystem what would normally cost $30 per gig on a high-end NetApp FAS 6000 with Fast Cache SSD.

Again, it all goes to show that what counts in storage is the intelligence, the software. And it’s another manifestation of that intelligence moving to the (virtual) server.

Virsto plans to tackle the VMware virtual server market in the coming months.

(*”What it can do” being fast access storage with thin provisioning, snapshots and cloning.)


September 15, 2011  6:28 AM

Fusion IO and the evolution of VM storage



Posted by: Editor
flash, NAND, server cache, server virtualisation, SSD, VDI, virtual desktop

Server virtualisation is responsible for a lot of changes to the world of storage. Initially, it drove a widespread move to shared storage. But now it seems the demands of virtual servers and desktops are driving storage away from the array.

Two recent emerging vendors/products have cited the demands of applications, virtual servers and desktops as drivers for the location of storage right next to the hypervisor. The first is Nutanix, which has come up with a sort of clustered DAS.

The second is Fusion IO, which I spoke to this week. Continued »


September 8, 2011  7:02 AM

Drobo’s sub-£10,000 enterprise-featured SAN



Posted by: Editor
iSCSI, SAN, SMB storage, Drobo, thin provisioning, tiered storage

I spoke to Drobo yesterday about their new B1200i and two things stood out for me from the discussion. The first was the extent of enterprise-like features in a box put out by a company associated with the SOHO and desktop storage market. The second was that the cost of the B1200i probably indicates the entry point for SAN storage. Continued »


July 14, 2011  8:50 AM

Caringo white paper musings



Posted by: Editor
caringo, object-based storage, raid 1, raid 5, raid 6

I was forwarded this Caringo white paper recently. It’s called Protect Your Data from RAID. It should actually be called Protect Your Data from RAID 5 and 6.

It, rightly I think, attacks the parity-based RAID levels for being increasingly inefficient in large-scale deployments and for risking data loss, because the likelihood of a further drive failure rises during the long rebuilds that today's big drives require. It also points out the inefficiencies of needing to string many randomly located blocks together to read large files.
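
To put a rough number on that risk, here's a back-of-envelope Python calculation of the chance of hitting an unrecoverable read error (URE) while reading every surviving disk during a RAID 5 rebuild. The URE rate and drive size are my own illustrative assumptions, not figures from the white paper:

    URE_RATE = 1e-14          # assumed: one unrecoverable error per 10^14 bits read
    DRIVE_TB = 2              # assumed drive size
    SURVIVING_DRIVES = 5      # disks that must be read in full to rebuild the failed one

    bits_to_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8
    p_clean_rebuild = (1 - URE_RATE) ** bits_to_read
    print(f"Chance of completing the rebuild without a URE: {p_clean_rebuild:.1%}")

With those assumptions the rebuild completes cleanly less than half the time, which gives you a flavour of why the parity levels come in for such a kicking.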

It contrasts these with its Caringo CAStor system, which uses commodity x86 servers and JBODs and an object-based storage system (ie, one in which whole files are contiguous rather than scattered blocks) in a mirrored configuration that relies on almost constant replication between mirrors and defragging to optimise file locations. It claims much better file access efficiency for small files and much more efficient drive rebuilds than RAID 5 or 6.

The white paper makes some good points about RAID 5 and 6, but loses its punch as it morphs into a marketing tool.

Oh, and if you don’t want to read it, the condensed version is, “Don’t use RAID 5 or 6, use CAStor, which is like RAID 1 but with object-based storage.”

