StorageBuzz


April 20, 2017  2:46 PM

Cisco clears the fabric for NVMf as Fibre Channel reaches 32Gbps

Antony Adshead

Cisco’s recent announcement of 32Gbps capability in its MDS 9700 Director switch and UCS C-series server products means both of the major storage networking hardware makers can now offer the next generation of Fibre Channel bandwidth to customers; Brocade got there last summer with its Gen 6 products.

With 768 ports and a maximum bandwidth of around 1.5Tbps, Cisco is targeting the MDS 9700 at customers with high bandwidth requirements, such as heavily virtualised environments, users of flash arrays and those looking to support NVMe flash storage.

For now, the new 32Gbps products are unlikely to make a lot of difference to those using NVMe arrays, because simply replacing SAS and SATA drives in the array with NVMe drives means there’s still a bottleneck at the storage array controller.

But Cisco and Brocade now also offer NVMe-over-Fibre Channel fabric connectivity. There aren’t any storage array products that can fully take advantage of NVMf-FC yet, but when they arrive the potential of flash storage will be hugely boosted.

That’s because NVMe – a PCIe-based protocol – offers huge performance gains for flash storage over existing SAS and SATA connection protocols.

For now, however, NVMe-connected flash drives mostly only realise their full potential in what are effectively direct-attached storage deployments.

Even with NVMf – which allows NVMe to be transported over Ethernet or Fibre Channel – NVMe can’t reach its full potential as back-end storage accessed across a fabric or network, in other words as shared storage as we know it.

That’s because the gains brought by NVMe are nullified to a large extent by the storage array controller, which provides protocol handling, storage addressing and provisioning, RAID, and features such as data protection and data reduction.

That is, until storage controllers have sufficient processing power to not be a bottleneck.
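
To put rough numbers on that controller bottleneck, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions for illustration only (a device read time plus a flat controller overhead), not measurements of any particular array.

```python
def effective_latency_us(device_us: float, controller_us: float) -> float:
    """End-to-end latency seen by the host: device time plus controller overhead."""
    return device_us + controller_us

# Assumed, order-of-magnitude figures for illustration only.
sas_ssd = effective_latency_us(device_us=200, controller_us=300)
nvme_ssd = effective_latency_us(device_us=100, controller_us=300)

print(f"SAS SSD behind a controller : {sas_ssd:.0f} us end-to-end")
print(f"NVMe SSD behind a controller: {nvme_ssd:.0f} us end-to-end")
# The faster drive only improves the total by ~20%, because the controller dominates.
print(f"Improvement: {100 * (1 - nvme_ssd / sas_ssd):.0f}%")
```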

But for now at least NVMe-capable drives exist, a transport (NVMf) exists and the two major storage networking suppliers support it. All we’re waiting for are the array vendors.

April 4, 2017  8:51 AM

Elastifile goes to market with a parallel file system for the hybrid cloud

Antony Adshead

There are lots of scale-out, parallel file systems about, from those of the big six array makers, such as NetApp’s clustered Ontap and EMC’s Isilon OneFS, to open source offerings and their distributions, such as Lustre and Red Hat’s GlusterFS.

But we have a new entrant in Elastifile, a software-only startup of Israeli origin that has built a new parallel file system from the ground up that it says can form a single namespace across on-prem and cloud locations. It aims to take on object storage, and in fact uses object representation to allow customers to burst workloads in the cloud.

It aims at traditional secondary storage use cases such as backup and restore but also analytics workloads.

Elastifile says its file system can scale from a minimum of three nodes to potentially thousands, although the largest deployment so far is 100 nodes. “So far we have found no limitations,” said CEO Amir Aharoni. “But we are working with billions of files. There’s no limit. We assume we can go to 1,000s of nodes.”

Ordinarily, scale-out storage begins to slow down as it reaches very large numbers of files, as the tree-like hierarchy of the file system becomes cumbersome. Elastifile execs claim that their file system design distributes metadata so that there are no bottlenecks.
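
Elastifile has not published the detail of how its metadata is distributed, but a generic illustration of the idea, spreading metadata ownership by hashing paths across nodes rather than hanging everything off one directory tree, might look like this sketch (the paths and node count are hypothetical):

```python
import hashlib

def metadata_owner(path: str, num_nodes: int) -> int:
    """Map a file path to the node that owns its metadata.
    A generic hash-partitioning illustration, not Elastifile's actual scheme."""
    digest = hashlib.sha256(path.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

# Spreading ownership by hash means no single node holds the whole directory tree,
# so metadata load grows with the node count instead of piling up at one server.
for p in ["/proj/a/file1", "/proj/a/file2", "/proj/b/file3"]:
    print(p, "-> metadata node", metadata_owner(p, num_nodes=100))
```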

Replication is anything from two-way upwards. Aharoni says the company may add erasure coding at a later date, but that this isn’t high on the agenda because the file system was developed to use data reduction suited to replication rather than erasure coding.
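
As a rough illustration of the capacity trade-off being weighed here (generic arithmetic, not Elastifile figures), replication and erasure coding differ mainly in how much raw capacity they consume per usable terabyte:

```python
def replication_overhead(copies: int) -> float:
    """Raw capacity consumed per usable terabyte with n-way replication."""
    return float(copies)

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw capacity consumed per usable terabyte with k+m erasure coding."""
    return (data_shards + parity_shards) / data_shards

print("2-way replication :", replication_overhead(2), "x raw per usable TB")
print("3-way replication :", replication_overhead(3), "x raw per usable TB")
print("8+2 erasure coding:", round(erasure_overhead(8, 2), 2), "x raw per usable TB")
```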

The interesting bit is that the Elastifile file system can extend to the cloud, where data is represented as native scale-out file storage when active or as objects when inactive. When a customer wants to burst a workload to the cloud they can “check out” data from its object state and run, for example, analytics on it in the cloud in file format. Then, when finished, they check the data back in to object representation.

Aharoni gave an example of a microchip designer that does “lift and shift” to the cloud in that way.
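
A sketch of what that check-out/check-in flow might look like from the user’s point of view. The function names below are purely illustrative and do not correspond to any published Elastifile API:

```python
# Hypothetical workflow sketch; the function names simply mirror the description above.

def check_out(dataset: str) -> None:
    print(f"promoting {dataset}: object representation -> file namespace")

def run_analytics(dataset: str) -> None:
    print(f"running cloud analytics against {dataset} as ordinary files")

def check_in(dataset: str) -> None:
    print(f"demoting {dataset}: file namespace -> object representation")

def burst_to_cloud(dataset: str) -> None:
    check_out(dataset)      # rehydrate inactive data into the file namespace
    run_analytics(dataset)  # cloud compute works on it in file format
    check_in(dataset)       # park the now-inactive data back in cheaper object form

burst_to_cloud("chip-design-run-42")
```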

So, for some use cases the aim is access to data that is infrequent and not especially latency-sensitive, and where storage in the cloud would be cost effective.

Aharoni said the company is aiming at scientific analytics, financial services, oil & gas.


March 28, 2017  2:55 PM

Zstor flash JBOF shows the limits of NVMe/NVMf

Antony Adshead

German storage supplier Zstor has uprated the JBOD (just a bunch of disks) for the NVMe flash era and produced a JBOF (just a bunch of flash).

The NV24P allows for up to 24 2.5in NVMe flash drives of up to 8TB each, for a total capacity of 184TB.

Connectivity to up to eight servers is via what appears to be NVMe over fabrics (NVMf), in this case using RDMA (remote direct memory access), with the NVMe drives aggregated by PCIe switching.

So, what the Zstor JBOF provides is direct-attached storage (DAS) for hosts with none of the functionality expected of a shared storage array.

Without giving any exact figures, Zstor promises access times similar to those for server-located flash, which must mean in the tens to low hundreds of microseconds.

The Zstor NV24P illustrates the current limits of NVMe, a PCIe-based standard that potentially allows flash storage to break through the limits imposed by SCSI and its use in the disk-era SAS and SATA protocols.

In other words, NVMe is a direct slot-in for PCIe flash in the server with no performance hit. With NVMf, using for example RDMA – which gives hosts a direct, memory-like connection to the drives – as here, it can also provide near server-SSD performance.
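
As a rough, assumed-numbers illustration of why an RDMA hop is tolerable where a traditional controller is not: if a local NVMe read takes on the order of 100 microseconds and a well-tuned RDMA transport adds perhaps 10 to 20 microseconds, the penalty is a small fraction of the total.

```python
# Assumed, order-of-magnitude numbers for illustration only.
local_nvme_us = 100.0    # assumption: read latency of a PCIe-attached NVMe NAND drive
rdma_overhead_us = 15.0  # assumption: extra round-trip cost of the NVMf/RDMA transport

fabric_nvme_us = local_nvme_us + rdma_overhead_us
print(f"Local NVMe : {local_nvme_us:.0f} us")
print(f"NVMf (RDMA): {fabric_nvme_us:.0f} us "
      f"(+{100 * rdma_overhead_us / local_nvme_us:.0f}% over local)")
```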

But it cannot provide the storage addressing and provisioning, RAID, and features such as data protection and data reduction that have traditionally been provided by a controller.

And so this is effectively a DAS solution, albeit a useful one, because adding a controller would lop off the performance benefits NVMe brings as it carried out the processing those features require.

This controller bottleneck is an obstacle in the road to fulfilling the potential of NVMe in a true shared storage array format.

We await the vendor that can build in the functionality brought by a controller but with the power to overcome any hit to performance.


February 22, 2017  12:00 PM

Big storage revenues face continued decline but for Big Blue things look worse

Antony Adshead

The top five in storage continue to see declining revenues, but for IBM the decline in storage seems to have been worse than for its competitors.

This week Storage Newsletter published an aggregation of financial results that included the top five storage array makers.

Its findings rank the vendors as follows, with all vendors for which comparisons were possible showing a decline in revenues.

#1 – EMC, with storage revenue of $16.3 billion in 2015 and no equivalent figure for 2016, because Dell EMC does not publish those figures.
#2 – NetApp, with revenue of $6.123 billion in 2015 and $5.546 billion in 2016, a decline of 9% year-on-year.
#3 – Hitachi Data Systems, with revenue of $4.079 billion in 2015 and no figures noted for 2016. Revenue decline from 2014 to 2015 was recorded as -4%.
#4 – HPE, with 2015 revenue of $3.180 billion and 2016 revenue of $3.065 billion, a decline of 4% year-on-year.
#5 – IBM, with 2015 revenue of $2.4 billion, down to $2.184 billion in 2016, a decline of 9%.
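
The quoted percentages can be reproduced, allowing for rounding, from the figures above with a quick calculation:

```python
def yoy_decline_pct(prev_billion: float, curr_billion: float) -> float:
    """Year-on-year revenue decline, as a percentage."""
    return 100 * (prev_billion - curr_billion) / prev_billion

for vendor, prev, curr in [("NetApp", 6.123, 5.546),
                           ("HPE", 3.180, 3.065),
                           ("IBM", 2.400, 2.184)]:
    print(f"{vendor}: down {yoy_decline_pct(prev, curr):.1f}% year-on-year")
```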

That’s just a snapshot of what we know already. The “traditional” storage array market is in a period of long-term decline, with hyper-converged, hyperscale and converged systems making good progress. See here for a wider analysis.

An interesting comparison is to put these rankings against those of four years ago.

Here it becomes apparent that a big loser is IBM.

In 2011 we noted the following rankings from Gartner figures:

* EMC: $6.279 billion and 32% market share
* IBM: $3 billion and 14.2%
* NetApp: $2.45 billion and 11.5%
* HP: $2.07 billion and 9.8%
* HDS: $1.99 billion and 9.4%

Aside from EMC’s gathering of an even greater proportion of storage revenues, the big change there is IBM’s drop down the rankings.
Why that happened may be the subject of future blogs, but I’m very happy to hear your views on the subject in the meantime.


February 16, 2017  12:46 PM

NVMe gives “shared DAS” as an answer for analytics; but raises questions too

Antony Adshead

Go back 10 or 20 years and direct-attached disk was the norm. IE, just disk in a server.

It all became a bit unfashionable as the virtualisation revolution hit datacentres. Having siloed disk in servers was inherently inefficient and server virtualisation demanded shared storage to lessen the I/O blender effect.

So, shared storage became the norm for primary and secondary storage for many workloads.

But in recent years we have seen the rise of so-called hyperscale computing. Led by the web giants, this aggregates self-contained nodes of compute and storage in grid-like fashion.

Unlike enterprise storage arrays, these are constructed from commodity components, with an entire server/storage node swapped out if faulty and replication etc handled by the app.

The hyperscale model is aimed at web use cases and in particular the analytics – Hadoop etc – that go with it.

Hyperscale, in turn, could be seen as the inspiration for the wave of hyper-converged combined server and storage products that has risen so quickly in the market of late.

Elsewhere, however, the need for very high performance storage has spawned the apparently somewhat paradoxical direct-attached storage array.

Key to this has been the ascendance of NVMe, the PCIe-based card interconnect that massively boosts I/O performance over the spinning disk-era SAS and SATA to something like matching the potential of flash.

From this vendors have developed NVMe over fabric/network methods that allow flash plus NVMe connectivity over rack-scale distances.

Key vendors here are EMC with its DSSD D5, E8 with its D24, Apeiron, Mangstor, plus Excelero and Pavilion Data Systems.

What these vendors offer is very high performance storage that acts as if it is direct-attached in terms of its low latency and ability to provide large numbers of IOPS.

In terms of headline figures – supply your own pinches of salt – they all claim IOPS in the up to 10 million range and latency of <100μs.

That’s made possible by taking the storage fabric/network out of the I/O path and profiting from the benefits of NVMe.
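
A quick sanity check of what those headline claims imply, using Little’s Law (I/Os in flight = throughput multiplied by latency) and taking the vendors’ figures at face value:

```python
# Little's Law: outstanding I/Os = throughput (IOPS) x latency (seconds).
iops = 10_000_000   # ~10 million IOPS, as claimed
latency_s = 100e-6  # <100 microseconds, as claimed

outstanding_ios = iops * latency_s
print(f"Implied concurrency: ~{outstanding_ios:.0f} I/Os in flight")
# Roughly 1,000 outstanding I/Os, which is why these designs lean on deep NVMe
# queues and many parallel paths rather than a single controller in the data path.
```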

In some cases vendors are taking the controller out of the data path too to boost performance.

That’s certainly the case with Apeiron – which does put some processing in HBAs in attached servers but leaves a lot to app functionality – and seems to be so with Mangstor.

EMC’s DSSD has dual “control modules” that handle RAID (proprietary “Cubic RAID”) and presumably DSSD’s internal object-based file layout. E8 appears to run some sort of controller for LUN and thin provisioning.

EMC and Mangstor run on proprietary drives while E8 and Apeiron use commodity cards.

A question that occurs to me about this new wave of “shared DAS” is: Does it matter whether the controller is taken out of the equation?

I tend to think that as long as the product can deliver raw IOPS in great numbers then possibly not.

But, we’d have to ask how the storage controller’s functions are being handled. There may be implications.

A storage controller has to handle – at a minimum – protocol handling and I/O. On top of that are LUN provisioning, RAID, thin provisioning, possibly replication, snapshots, data deduplication and compression.

All these vendors have dispensed with the last two of these, and Mangstor and Apeiron have ditched most of the rest, with Apeiron, for example, offloading much to server HBAs and the app’s own functionality.

So, a key question for potential customers is how the system handles controller-type functionality. Any processing done over and above the fundamentals has to happen somewhere and potentially hits performance, so is flash capacity over-provisioned to keep performance up while controller-style work saps it?

Another question is whether, despite the blistering performance possible with these shared NVMe-based DAS systems, they will be right for leading/bleeding edge analytics environments.

The workloads aimed at – such as Hadoop but also Splunk and Spark – are intensely memory hungry and want their working dataset all in one place. If you’re still having to hit storage – even the fastest “shared” storage around – will it make the grade for these use cases or should you be spending money on more memory (or memory supplement) in the server?


February 6, 2017  12:26 PM

Zadara finds storage managers’ #1 wish is a scalable cloud

Antony Adshead

There was a dodgy* old joke about a glass of beer that re-filled itself when you had drunk it. The unwritten premise was that that’s what everyone (well, men in the 1970s, I presume) would want if they could get it.

But, what would storage managers wish for if they could get it? Something similar, one would think, given the ever-present headache of data growth. IE, storage capacity that is easily scalable, usually upwards, but downwards too when you need it. IE, cloud storage.

Well, Zadara Storage – which makes software-defined storage that can be used in the cloud – asked 400 people who manage storage in the UK, US, and Germany what their #1 wish in 2017 would be.

The largest chunk (33.25%) answered “cloud storage that scales up or down according to my organisation’s needs”, with no appreciable difference in results between the three countries.

That wish was expressed by about three times more people than opted for “new storage hardware to hold my organisation’s data”, although there was a significant difference between the two sides of the Atlantic here, with 10% of UK and Germany respondents desirous of more in-house capacity while 16% of those in the US wanted more hardware.

The main takeaway, I think, is that easily scalable storage is the key thing storage managers want.

Perhaps more profound is the assumption that that can only be found in the cloud.

This looks like a harbinger of things to come and a sign that the rise of the cloud is inevitable.

Currently, a lack of guarantees over latency and availability (no-one can guarantee against a cable getting dug up, for example) means the cloud is becoming more popular but is not trusted for the most performance-hungry storage operations.

Despite that, the survey tells us customers want storage that can scale easily and that the cloud is where it will likely come from.

In the long term this will bring major changes in data storage, with cloud providers profiting from economies of scale in terms of buying storage hardware, and with capacity delivered via the cloud for all but perhaps the most sensitive workloads.

Footnote

* The joke, as I remember it, told in Britain in the 1970s, took a poke at the Irish. In it, an Irishman was offered three wishes. He asked for a glass of beer that re-filled itself. For his second wish, he asked for another.

Given the issues around data portability between cloud providers the joke could be successfully adapted to one about a storage manager offered cloud storage capacity that scaled itself. IE, you’d be crazy to want another one.


January 23, 2017  12:08 PM

Storage predictions 2017: Hyper-converged to march on, and get NVMe

Antony Adshead

Hyper-converged infrastructure has been a rapidly rising star of the storage and datacentre scene in the past year or two.

And Gartner predicts that rise will continue, with hyper-converged product sales set to more than double by 2019 to around $5 billion.

As that happens, according to the analyst house, hyper-converged infrastructure will increasingly break out from hitherto siloed applications and become a mainstream platform for application delivery.

Part of that evolution surely has to be the inclusion of NVMe connectivity, which should be a shoo-in for hyper-converged.

NVMe is another rapidly rising storage star. It’s a standard based on PCIe, and offers vastly improved bandwidth and performance to drives compared with existing SAS and SATA connections.

In other words it’s a super-rapid way of connecting drives via PCIe slots.
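
For a sense of scale, these are the commonly quoted, rounded usable-bandwidth figures per drive link; treat them as ballpark approximations rather than spec-sheet values:

```python
# Rounded, commonly quoted usable-bandwidth figures per drive link (ballpark only).
links_mb_per_s = {
    "SATA III (6 Gbps)": 600,        # ~0.6 GB/s after encoding overhead
    "SAS-3 (12 Gbps)": 1200,         # ~1.2 GB/s per lane
    "NVMe over PCIe 3.0 x4": 3900,   # ~3.9 GB/s across four lanes
}

base = links_mb_per_s["SATA III (6 Gbps)"]
for link, bw in links_mb_per_s.items():
    print(f"{link:22s} ~{bw:4d} MB/s ({bw / base:.1f}x SATA)")
```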

NVMe has so far most notably been the basis for EMC’s DSSD and startup E8’s D24, but has apparently not been widely taken up by hyper-converged market players.

Surely though, widespread adoption is just a matter of time. With its PCIe connectivity, literally slotting in, NVMe offers the ability to push hyper-converged utility and scalability to wider sets of use cases than currently.

There are some vendors that focus on their NVMe/hyper-converged products, such as X-IO (Axellio), Scalable Informatics, and DataON, but NVMe as standard in hyper-converged is almost certainly a trend waiting to happen.


January 5, 2017  1:29 PM

Violin Memory: Reasons behind the demise of an all-flash pioneer

Antony Adshead

In these days when flash storage grabs so many headlines for the right reasons it’s quite a surprise that one of the pioneers in the space – Violin Memory – is about to hit bankruptcy hard, with an upcoming fire sale of its remaining assets.

Violin was – and still is – widely regarded as having a good set of products: all-flash arrays built around custom silicon, with an emphasis on performance and, more recently, the addition of advanced storage features that enterprise customers need.

In September 2013, Violin was, according to Gartner, the market leader in all-flash arrays, with a 19% share.

At its IPO in September 2013 the company raised $162m, selling shares for $9 initially, but saw them fall to $7 on the same day and to $3 by the end of November that year. Investors are thought to have been scared off by how quickly Violin had used its cash reserves.

But in 2014, Violin gained a new CEO and sold off its PCIe flash card business. In June that year, it upgraded its all-flash arrays – re-branding the product the Concerto 7000 series – with the addition of advanced storage functionality.

It released the Flash Storage Platform (FSP) in February 2015, with additional models in December 2015. Then in September 2016, almost as a last throw of the dice, Violin Memory added two new FSP all-flash arrays, aimed at extreme high performance and general workloads.

The numbers that track Violin’s decline are quite shocking.

In 2012 Violin could boast $72 million in revenue. By mid-2016 that was down to $7.5 million for the previous quarter and the company had lost $603.3 million during its history, including $99.1 million in 2015. In 2016 Violin lost $25.5 million, $22.2 million and $20.1 million over the first three quarters.

From January 2014 Violin reduced headcount from 437 employees to 82 by the end of 2016, through several staff reductions.

By the end of 2016 Violin’s reserves were down to $31.4 million in cash and equivalents and it had filed for Chapter 11 bankruptcy protection. By the end of December Violin had $3.62 million in cash, with that total dropping to around $1.6 million by the January 13 planned date for the auction of its assets.

So, what went wrong?

The big, obvious answer is that it spent more than it earned, did not gain new customers at a quick enough rate, and failed to cut costs quickly or effectively enough to stem a worsening situation.

And then, with its fortunes already declining, Violin suffered further from the market’s perception of that decline, which only added fuel to the fire in a vicious circle.

It’s quite some decline. Clearly, as all-flash market leader in 2012/2013, Violin was given a resounding thumbs up by the market it served.

Then investors also flocked to throw $9 a share at it in its IPO. Those shares are now worth a paltry 4c each.

While Violin clearly possessed a good set of products and tens of US patents, the market it initially did so well in arguably became a niche one.

That niche was the early all-flash array market, where high performance was married to high cost and the boxes lacked the features enterprises expected of their general storage, such as synchronous replication, CDP and snapshots.

And that didn’t matter because businesses with extremely high performance requirements were happy to throw cash at what was effectively a point solution for their most valuable datasets.

But the all-flash market changed.

All-flash arrays gained advanced storage features and hybrid flash became the most common method of flash deployment. The flash storage array market became less one of point solutions and spread to encompass general, mixed workload, primary storage needs. The big six in storage and many of the startups successfully provided this.

Meanwhile, Violin just didn’t seem to be able to keep up, at least not without incurring crippling losses.

Now all that remains to be seen is what will emerge from the auction sale of the remains of Violin later this month.


November 16, 2016  2:49 PM

Flash is costly and will never achieve price parity with HDD: Infinidat CEO

Antony Adshead

Flash storage is over-rated. Well, if not over-rated, then certainly more costly than many would have us believe and price parity with spinning disk is a very, very long way off.

Those are the views of storage industry veteran (billed as Mr Symmetrix in the PR puff) and Infinidat CEO Moshe Yanai.

“The main problem with flash is the cost,” said Yanai this week. “Even now, Seagate has introduced a 16TB flash drive, but it costs $8,000 to $9,000. You can get the same capacity spinning disk drives for 1/50th of the price.”

He added: “Flash is coming down in price but so are HDDs. If people need to keep more data, to do it with flash would make them bankrupt.”
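
Working Yanai’s figures through, and treating them as his claims rather than list prices, gives a sense of the gap he is describing:

```python
# Working through the figures quoted above; treat them as Yanai's claims.
flash_drive_tb = 16
flash_drive_cost = 8500         # midpoint of the $8,000 to $9,000 quoted

flash_per_tb = flash_drive_cost / flash_drive_tb
hdd_per_tb = flash_per_tb / 50  # "1/50th of the price" for the same capacity on HDD

print(f"Flash: ~${flash_per_tb:,.0f} per TB")
print(f"HDD  : ~${hdd_per_tb:,.0f} per TB")
```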

According to Yanai the claims made by flash vendors are spurious and don’t compare meaningfully between product categories.

He said: “People who sell flash say it’s cheaper but they compare it to the cost of 15,000rpm spinning disk when a system based on 7,200rpm HDDs costs 10 times less than 15k.”

“There is no price crossing point coming, not in the next 10 or 20 years; ask people who produce disk and flash!”

Just so we have the full disclosure bit, though, Yanai’s Infinidat F-series storage arrays scale to petabytes and are based on very large numbers of nearline-SAS spinning disk with triple controllers, relatively big memory modules and flash for working datasets.

The secret sauce in the Infinidat software allows rapid access to data via striping of small data blocks (64KB) across all drives in an array’s enclosures.

That means, said Yanai, that if the array needed to go to spinning disk for data you could have all 480 HDD drive actuators working at once to seek and read data (assuming that’s how many drives you had fitted). Meanwhile, data writes are held in memory and written sequentially so as not to get in the way of reads.
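
A generic illustration of the striping idea, not Infinidat’s actual layout: split data into 64KB sections and deal them round-robin across every drive, so a large read can put all the actuators to work at once.

```python
# Generic striping illustration, not Infinidat's actual layout.
STRIPE_KB = 64
NUM_DRIVES = 480  # the drive count used as an example above

def drive_for_section(section_index: int) -> int:
    """Deal 64KB sections out round-robin across every drive in the system."""
    return section_index % NUM_DRIVES

# A 1GB piece of data is split into enough sections to touch every drive,
# so a big sequential read engages all the drive actuators at once.
sections = (1024 * 1024) // STRIPE_KB
drives_touched = {drive_for_section(i) for i in range(sections)}
print(f"{sections} sections of {STRIPE_KB}KB spread across {len(drives_touched)} drives")
```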


October 26, 2016  12:44 PM

Brisk growth for storage in China; and a who’s who of Chinese data storage companies

Antony Adshead

It’s interesting to take a look at IDC’s recently-released market tracker figures for external disk storage systems for China.

Interesting in two senses: to see who the key players are (largely a different set of storage vendors to those we’re used to), and to see how the market there is progressing in terms of growth.

The headline facts are that in Q2 of 2016 the China external disk storage systems market was worth $527 million and that was up 7.2% year-on-year (although most growth was in the mid-market, with low- and high-end segments seeing declines).

The top five storage vendors were Huawei (21%), followed by EMC (10%), then Hikvision (8%), Inspur (6%) and IBM (6%), while “others” accounted for 49% of the market.

Inspur was a new entrant, according to the report, thanks to its work with government. Other notable growing storage companies mentioned included Sugon, Macrosan and Tongyou, where government and education were cited as drivers.

For comparison, IDC’s worldwide external enterprise storage systems tracker saw Q2 2016 revenue of $5.66 billion, registering no growth year-on-year.

Top five vendors worldwide were EMC (28%), HPE and NetApp tied at number two (10.5% each), then IBM (9.5%), and Dell and Hitachi tied at number five (7.5% each), with “others” accounting for 27%.

China’s storage growth is largely in line with GDP growth of around 6% to 7%, which is seen as possibly unsustainably high. IDC’s identification of government and education as drivers falls in line with Chinese government policy of spending and lending.

A stark fact also is that most of the China storage market appears to be wrapped up by local vendors, with the US-based big six mostly struggling to register market share.

So, who are the key Chinese storage vendors?

Huawei – Probably the best-known of the Chinese hardware vendors, it’s a giant company with turnover in the several billions of dollars range and a presence globally. It made its name in telecoms equipment and networking, but its Oceanstor storage products include storage switches and storage arrays that go from entry-level/mid-range unified (iSCSI and Fibre Channel) to enterprise storage in the tens of petabytes range. Huawei and software-defined storage maker Datacore inked a deal last year to bring hyper-converged appliances to market.

Hikvision – Mostly a vendor centred on video camera (CCTV, surveillance etc) technology but with a couple of unified storage (iSCSI and NAS) boxes in its product range. It’s quite remarkable that it gets into a storage top five on such a limited storage product family.

Inspur – This company has majored on servers and has seen Microsoft investment and agreements with VMware. On the storage front it appears to focus on enterprise-class Fibre Channel hardware, with a three-model range whose features include synchronous replication and capacities that scale to petabytes.

Sugon – Another company with VMware connections (some investment flowing to the Chinese company last year) and a background in HPC/super-computers, it also majors in servers but has a storage product range that stretches to (under the Dawning brand) a single storage server (so probably NAS) that scales to petabytes.

Macrosan – The only storage specialist among this crop. Its products range from entry-level NAS and surveillance-oriented boxes to high-end, fully-featured arrays that can be equipped with flash. Although called MacroSAN, none of the product specs on the website mention block storage protocols such as iSCSI or Fibre Channel as supported, though I guess this may be an oversight given the scale the products seem to go to.

Tongyou – Appears to be a brand name of Toyou Feiji. There appears to be no company website to showcase products, but Bloomberg says it “researches, develops and applies data storage and protection, disaster recovery and related technologies. The Company’s main products include disk storage systems, storage management softwares, data storage solutions, and technical services.”


