StorageBuzz


December 6, 2017  11:56 AM

Hitachi Vantara: Storage with an Internet of Things advantage?

Antony Adshead

Hitachi Data Systems is no more.

It has been rolled into a new division, Hitachi Vantara. That is, HDS, with its largely enterprise-focussed data storage products, has been joined by the Internet of Things-focussed Hitachi Insight and the analytics arm, Pentaho.

The premise for the move is that we are on the verge of a world in which data from machines will become increasingly important. So, potentially large amounts and varying types of data will need to be stored. And there is no question that to get the most from that data there will be a pressing need to make some sense of it via analytics.

That’s more or less the explanation of Steve Lewis, CTO of Hitachi Vantara, who said: “The reality for a lot of companies – and the message hasn’t changed – is that they are required to manage data in increasingly efficient ways. There will be more and more machine-to-machine data being generated and the questions will be, how do we store it, how long do we keep it, what intelligence can we gain from it?”

Hitachi Vantara would appear to be in a prime position to profit from an IoT future. It’s a small part of a vast conglomerate built mostly on manufacturing businesses whose products range from electronics to power stations via defence, automotive, medical and construction equipment, but also includes financial services.

That background should provide almost unique opportunities to develop data storage for a world of machine data and intelligence gathering therefrom.

Will there be any impacts on the datacentre and storage in particular?

Lewis said: “Storage will continue on the same trend with the growth of data volumes and the need for different levels of performance.”

“But, for example, where companies used fileshare as a dumping ground and didn’t know what they had, increasingly organisations need to know what data they hold, the value of it and make more use of metadata. ‘Metadata is the new data’, is something we’re hearing more and more.”

Lewis cited the example of the Met Police’s rollout of 20,000 body-worn cameras and its effects – with several GB of data per 30 minutes of video – on their networks (“never designed for video content”) and on storage volumes, as well as the need to store that data for long periods (100 years in the case of the police), to be able to find it, make sense of it and delete it when required.

“So, it’s all less about initial purchase price and more about the cost of retention for its lifetime,” said Lewis.
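To give a sense of the scale involved, here is a rough back-of-envelope sketch in Python. Only the 20,000-camera figure comes from the example above; the per-clip size and retained-hours figures are illustrative assumptions, not Met Police numbers.

```python
# Back-of-envelope estimate of body-worn camera storage growth.
# Only the 20,000-camera figure is from the article; everything else
# is an illustrative assumption.
CAMERAS = 20_000
GB_PER_30_MIN = 2             # "several GB per 30 minutes" -- assume 2 GB
RETAINED_HOURS_PER_DAY = 1    # assume 1 hour of footage kept per camera per day
DAYS_PER_YEAR = 365

gb_per_camera_per_day = GB_PER_30_MIN * 2 * RETAINED_HOURS_PER_DAY
pb_per_year = CAMERAS * gb_per_camera_per_day * DAYS_PER_YEAR / 1_000_000

print(f"~{pb_per_year:.0f} PB of new video per year, retained for up to 100 years")
```

Even on these conservative assumptions the result is tens of petabytes a year, which is why the lifetime cost of retention dwarfs the initial purchase price.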

Clearly, Hitachi Vantara aims to profit from these types of need and plans to, said Lewis, “develop its own IoT framework and operating environment”.

It should be in a good position to do this. Time will tell.

November 21, 2017  2:45 PM

Could Blockchain power private distributed cloud storage?

Antony Adshead

Data storage has many fundamentals, but a key one is the idea that what we store should be or form part of a single, reliable copy.

This is what concepts such as the file system strive for, with locking mechanisms to ensure the integrity of a file as it is worked on, for example.

We know this is not practically achieved more widely and that multiple versions of files proliferate across corporate storage systems, via emails, the internet etc.

But, in some use cases it is absolutely essential that there is a single version of the truth, for financial transactions or in areas such as health records.

There are also good economic reasons to want to keep single copies of data. It’s simply cheaper than unnecessarily holding multiple iterations of files.

Enter Blockchain, which provides a self-verifying, tamper-proof chain (sharded, encrypted and distributed) of data that can be viewed and shared – openly or via permissions – and so provides a single version of the truth that is distributed between all users.

It has, therefore, key qualities sought in providing storage. The Blockchain itself is in fact stored data, though practically it’s not a great idea to chain together more than small blocks because the whole chain would become too unwieldy.

So, it’s not storage as we know it. However, some startups have started to apply Blockchain to storage.

Sia and Storj allow customers to store data on other people’s hard drives in distributed cloud storage protected via Blockchain.

These services allow “farmers” to offer spare hard drive capacity in return for cryptocurrency. The actual data is sharded and encrypted in redundant copies and protected by public/private key cryptography.

The Blockchain stores information such as each shard’s location, along with proof that the farmer still holds the shard and that it is unmodified.
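As a minimal sketch of that idea – and emphatically not Storj’s or Sia’s actual protocol – the snippet below shows how a chain of hashed records can attest that a farmer still holds an unmodified shard. The block fields and the audit scheme are assumptions for illustration only.

```python
import hashlib, json, os

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A toy "chain": each block records a shard's location and content hash,
# plus the hash of the previous block, so no record can be altered silently.
chain = []

def add_block(shard_id: str, farmer: str, shard: bytes):
    block = {
        "shard_id": shard_id,
        "farmer": farmer,
        "shard_hash": sha256(shard),
        "prev": sha256(json.dumps(chain[-1], sort_keys=True).encode()) if chain else None,
    }
    chain.append(block)

def audit(shard_id: str, shard_returned_by_farmer: bytes) -> bool:
    """Check the farmer still holds the shard, unmodified."""
    block = next(b for b in chain if b["shard_id"] == shard_id)
    return sha256(shard_returned_by_farmer) == block["shard_hash"]

shard = os.urandom(1024)                      # an encrypted shard of a customer file
add_block("file42-shard0", "farmer-A", shard)
print(audit("file42-shard0", shard))          # True
print(audit("file42-shard0", shard[:-1]))     # False -- shard tampered with or lost
```

Real services add redundancy, periodic challenge-response audits and payment logic on top, but the core principle is the same: the chain holds the metadata and the proofs, not the data itself.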

Storj and Sia offer storage at something like one tenth the cost of Amazon S3 because there are no datacentres to maintain.

Meanwhile, Blockchain has been brought into use to manage the storage of health records.

As with Storj and Sia, the data isn’t actually held on the chain, but is referenced from and protected by it. Already, Estonia’s entire health records repository is kept this way.

There are other limited or private use cases too, such as backup product maker Acronis, which uses Blockchain to verify data held in its Acronis Storage cloud.

All this points in the direction of potentially useful secondary/nearline storage use cases based on Blockchain. An organisation could make use of unused storage capacity around the datacentre in the manner of Storj or Sia and so achieve much better utilisation.

There may be products out there already that do this, and I’m sure their representatives will let me know, if they exist.

Meanwhile, there are much grander future scenarios based on Blockchain in development such as BigChainDB’s Inter-Planetary Database, that aims to be the database for the “emerging decentralized world computer”.

Somewhere down the line the Storj/Sia model could be universally applied to public cloud storage, but for now – given concerns over bandwidth and security in the public cloud – distributed private cloud based on Blockchain management would be a reasonable target.


October 12, 2017  12:16 PM

Backup gets cloudier, edges closer to on-prem/cloud interchangeability

Antony Adshead

There was a time not too long ago when backup software pretty much only handled one scenario, ie backing up physical servers.

Then came virtualisation. And for quite some time the long-standing backup software providers – Symantec, EMC, IBM, Commvault et al – did not support it, while newcomers like Veeam arose to specialise in protecting virtual machines and gave the incumbents a shove.

Then came the rise of the cloud, which initially featured in backup products as an option for a target.

But as the cloud provided a potential off-site repository in which to protect data it also became the site for business applications as a service.

That meant the cloud became a backup source.

There is some data protection capability in the likes of Office 365 but this doesn’t always fulfil an organisation’s needs.

There’s the risk of losing access to data via a network outage, and there are compliance needs that might require, for example, an e-discovery process. Or there’s simply the need to make sure data is kept in several locations.

So, companies like Veeam now allow a variety of backup options for software services like Office 365.

You can, for example, use Veeam to bring data back from the cloud source to the on-prem datacentre as a target. That way you can run processes such as e-discovery that would be difficult or impossible in the cloud application provider’s environment.

Or you can backup from cloud source to cloud target. This could be to a service provider’s cloud services, or to a repository built by the customer in Azure, AWS etc. Either option might enable advanced search and cataloguing to be made easier, or might simply provide a secondary location.
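As a purely hypothetical sketch of the source/target combinations described above – none of this reflects Veeam’s actual API or configuration format – a backup catalogue might be modelled along these lines:

```python
from dataclasses import dataclass

@dataclass
class BackupJob:
    # Hypothetical representation of cloud/on-prem backup topologies;
    # names and fields are illustrative only.
    name: str
    source: str   # e.g. "office365", "on-prem-vm", "physical-server"
    target: str   # e.g. "on-prem-repository", "azure-blob", "service-provider-cloud"

jobs = [
    BackupJob("O365 mail to on-prem", source="office365", target="on-prem-repository"),
    BackupJob("O365 mail to cloud",   source="office365", target="azure-blob"),
    BackupJob("VM estate to cloud",   source="on-prem-vm", target="service-provider-cloud"),
]

for job in jobs:
    print(f"{job.name}: {job.source} -> {job.target}")
```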

With the possibility of backup of physical and virtual machines in the datacentre and the cloud and then spin-up to recover from any of these locations, full interoperability between environments is on the horizon.

For now the limits are not those of the backup product – assuming it has full physical-to-virtual interoperability – but those of the specific scenario. A very powerful dedicated physical server running high-performance transactional processing for a bank, for example, could likely not be failed over to the cloud.

But nevertheless, the trends in backup indicate a future where the site of compute and storage can slide seamlessly between cloud and on-prem locations.


October 4, 2017  2:15 PM

Veritas reinvents itself, adding intelligence to data protection

Antony Adshead

Veritas dates back to the early 1980s but disappeared for 10 years to become part of Symantec, with its NetBackup and Backup Exec leading the way in the data protection market.

But, in 2014 Veritas was burped out by Symantec into an environment where its leadership in backup could no longer be taken for granted.

The virtualisation revolution had transformed the likes of Veeam into mainstream contenders, while newcomers and revamped rivals such as Rubrik, Cohesity and ArcServe started snapping at its heels.

And while the virtualisation transformation had largely done its work, other long waves started to break.

These were: the drive towards big data and analytics, also fuelled by the upsurge of machine/remote data; a greater need for compliance, driven in particular by regulations such as Europe’s GDPR; and the emergence of mobile and the cloud as platforms operating in concert/hybrid with in-house/datacentre IT environments.

Such changes appear to have driven Veritas to focus on “broad enterprise data management capabilities”, according to EMEA head of technology, Peter Grimmond.

According to Grimmond, Veritas’s thinking centres on four aims, namely: data protection, ie backup and archiving; data access and availability, ie ensuring the workload can get to the data; gaining insight from the organisation’s data; and monetising that data if possible.

Its product set fits with those general aims, with data protection and availability products (NetBackup, Backup Exec, plus hardware backup appliances); software-defined storage products (file and object storage); and tools to help with information governance (data mapping and e-discovery tools, for example).

Compliance and the kind of data classification tasks that arise from it are strong drivers for Veritas right now.

“We are particularly focussed on unstructured data and how that can pile up around the organisation,” said Grimmond. “And whether that is a risk or of value to the organisation.”

That’s of particular use in, for example, any kind of e-discovery process, and as part of regulatory requirements such as Europe’s GDPR. This gives the customer the “right to be forgotten” following a transaction, which for organisations can mean they need to locate personal data and do what is necessary with it.

Veritas has also built intelligence into its storage products. Its object storage software product – announced recently at its Vision event – for example, incorporates its data classification engine so that data is logged, classified and indexed as it is written.

This functionality has in mind, for example, Internet of Things and point-of-sale scenarios, said Grimmond.
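A minimal sketch of the classify-on-write idea – purely illustrative, and not Veritas’s classification engine or any real API – might look like this, tagging objects against simple patterns at the moment they are written:

```python
import re

# Illustrative classification rules; a real engine would use far richer policies.
PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

index = {}   # object key -> set of classification tags

def put_object(key: str, payload: bytes):
    """Store an object and tag it at write time so it is searchable later."""
    text = payload.decode(errors="ignore")
    index[key] = {name for name, rx in PATTERNS.items() if rx.search(text)}
    # ... the actual write to the object store would happen here ...

put_object("invoices/0001.txt", b"Contact jane.doe@example.com, SW1A 1AA")
print(index["invoices/0001.txt"])   # {'email', 'uk_postcode'}
```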


September 19, 2017  12:34 PM

Barracuda to adapt cloud orchestration to cloud disaster recovery

Antony Adshead

Barracuda makes physical and virtual backup and archive appliances and has embraced the cloud as a target, as well as cloud-to-cloud backup and archiving for services such as Office 365.

But it is now working on adapting cloud orchestration to enable customers to build their own cloud disaster recovery.

DR in the cloud is already available, with options that range from simply using cloud backup – and recovering from it to customer infrastructure on-prem or in the cloud – to full-featured DRaaS offerings.

“We’re looking at how to leverage the public cloud to do rapid recovery,” said Alon Yaffe, product management VP at Barracuda.

“The way cloud disaster recovery exists, the industry is asking customers to go with a certain vendor for everything,” said Yaffe. “But, there will be advantages for customers to make use of the public cloud and do it on their own.”

If they do that though, how does Barracuda benefit? After all, it makes its living providing services and products in this sphere.

According to Yaffe it will be by offering the intelligence to help orchestrate disaster recovery that customers put together using public cloud services. The company will be working on that in the next couple of years, said Yaffe, with the aim of providing orchestration for on- and off-site DR functionality.

That presumably means the type of orchestration that can allocate and provision data and storage – within the bounds of RTOs and RPOs – and make it available following an outage, in a fully access-controlled fashion so that customers can build DIY disaster recovery infrastructure from a mix of public cloud and on-prem equipment.

It’s an adaptation of the idea of cloud orchestration to the sphere of DR and should be a valuable addition to the datacentre.
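To illustrate the kind of check such orchestration would have to make – whether the most recent replica and the expected spin-up time fit within the agreed RPO and RTO – here is a hypothetical sketch; the function and figures are assumptions, not anything from Barracuda’s roadmap.

```python
from datetime import timedelta

def plan_meets_slas(last_replica_age: timedelta, expected_recovery: timedelta,
                    rpo: timedelta, rto: timedelta) -> bool:
    """True if the data-loss window fits the RPO and the spin-up time fits the RTO."""
    return last_replica_age <= rpo and expected_recovery <= rto

# e.g. replicas taken every 15 minutes, cloud spin-up measured at ~20 minutes
ok = plan_meets_slas(
    last_replica_age=timedelta(minutes=12),
    expected_recovery=timedelta(minutes=20),
    rpo=timedelta(minutes=15),
    rto=timedelta(hours=1),
)
print("DR plan within RPO/RTO" if ok else "DR plan breaches RPO/RTO")
```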


September 11, 2017  1:44 PM

Rubrik plans backup and archive analytics on-prem and in cloud

Antony Adshead

Backup appliance maker Rubrik plans to add analytics to its products, including in the cloud.

Talking to ComputerWeekly.com this week, CEO Bipul Sinha would not give details, but did say the company plans to add analytics – not restricted to reporting on backup operations, but making wider use of metadata captured in backup and archive operations.

“To date what Rubrik has done has been to manage data backup, recovery, archiving. Going forward we’re looking at more analytics and reporting, doing more with the content stored,” he said.

Sinha said he felt Rubrik had won customer trust with its scale-out appliance offerings and that now the company “wanted to give more intelligence”, and that its analytics would enable customers to “interrogate data to gain useful business information”.

The Rubrik CEO also said: “There’s a definite trend to making one single platform on premises and across the cloud” and said that any analytics functionality offered by the company would span the two.

“Competing legacy companies have not innovated so it’s breaking new ground,” he added.

That’s not strictly true, as Druva claims e-discovery and data trail discovery functionality with its inSync product.

And backup behemoth Veritas recently added functionality that uses machine learning to ID sensitive and personal data to help with GDPR compliance.

To date though, the extent of analytics functionality in backup products has been limited, and some question to what extent backup and analytics can be merged, so we’ll have to wait and see what Rubrik comes out with.

Rubrik provides flash-equipped backup appliances that can scale out and which support most physical and virtual platforms, including the Nutanix AHV hypervisor.


August 29, 2017  2:14 PM

Is hyper-converged the answer to the NVMe bottleneck?

Antony Adshead

NVMe offers huge possibilities for flash storage to work at its full potential, at tens or hundreds of times what is possible now.

But it’s early days, and there is no universally accepted architecture to allow the PCIe-based protocol for flash to be used in shared storage.

Several different contenders are shaping up, however. We’ll take a look at them, but first a recap of NVMe, its benefits and current obstacles.

Presently, most flash-equipped storage products rely on methods based on SCSI to connect storage media. SCSI is a protocol designed in the spinning disk era and built for the speeds of HDDs.

NVMe, by contrast, was written for flash, allows vast increases in the number of I/O queues and the depth of those queues and enables flash to operate at orders of magnitude greater performance.
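The scale of that difference is clear from the published queue models of each interface. The figures below are the protocols’ headline maximums – real drives and HBAs expose rather less – with AHCI standing in for SATA.

```python
# Queue model of each host interface: (queues, commands per queue).
# These are headline protocol maximums; real devices expose less.
interfaces = {
    "AHCI/SATA": (1, 32),
    "SAS":       (1, 254),
    "NVMe":      (65_535, 65_536),
}

for name, (queues, depth) in interfaces.items():
    print(f"{name:10s} {queues:>6,} queue(s) x {depth:>6,} deep = "
          f"{queues * depth:>13,} outstanding commands")
```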

But NVMe currently is also roadblocked as a shared storage medium.

You can use it to its full potential as add-in flash in the server or storage controller, but when you try to make it work as part of a shared storage setup with a controller, you start to bleed I/O performance.

That’s because – consider the I/O path here from drive to host – the functions of the controller are vital to shared storage. At a basic level the controller is responsible for translating protocols and physical addressing, with the associated tasks of configuration and provisioning of capacity, plus the basics of RAID data protection.

On top of this, most enterprise storage products also provide more advanced functionality such as replication, snapshots, encryption and data reduction.

NVMe can operate at lightning speeds when data passes through un-touched. But, put it in shared storage and attempt to add even basic controller functionality and it all slows down.

Some vendors – Pure, for example, in its FlashArray//X – have said to hell with that for now and put NVMe into their arrays with no change to the overall I/O path. They gain something like 3x or 4x over existing flash drives.

So, how is it proposed to overcome the NVMe/controller bottleneck?

On the one hand we can wait for CPU performance to catch up with NVMe’s potential speeds, but that could take some time.

On the other hand, some – Zstor, for example – have decided not to chase controller functionality, with multiple NVMe drives offered as DAS, with NVMf through to hosts.

A different approach has been taken by E8 and Datrium, with processing required for basic storage functionality offloaded to application server CPUs.

Apeiron similarly offloads to the host, but to server HBAs and application functionality.

But elsewhere, controller functionality is seen as highly desirable and ways of providing it seem to focus on distributing controller function processing between multiple CPUs.

Kaminario CTO Tom O’Neill has identified the key issue as the inability of storage controllers to scale beyond pairs – or, where they nominally can, their tendency to become pairs of pairs as they scale. For O’Neill the key to unlocking NVMe will come when vendors can offer scale-out clusters of controllers that can bring enough processing power to bear.

Meanwhile, hyper-converged infrastructure (HCI) products have been built around clusters of scaled-out servers and storage. Excelero has built its NVMesh around this principle, and some kind of convergence with HCI could be a route to providing NVMe with what it needs.

So, with hyper-converged as a rising star of the storage market already, could it come to the rescue for NVMe?


June 28, 2017  10:14 AM

NVMe’s predicted ascendancy clouded by architectural hurdles

Antony Adshead

More than 70 vendors will be involved in the NVMe flash market by 2020 and the market will be worth $57 billion. Meanwhile, nearly 40% of all-flash arrays will be based on NVMe drives by 2020.

That’s according to research company G2M, which has predicted a compound annual growth rate for NVMe-based products of 95% per annum between 2015 and 2020.
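As a quick sanity check on those figures: a 95% CAGR sustained over the five years to 2020 implies roughly a 28-fold increase, so the $57 billion forecast implies a baseline of around $2 billion in 2015. Only the $57bn and 95% figures come from the report; the rest is derived arithmetic.

```python
# Implied 2015 baseline if the market reaches $57bn in 2020 at a 95% CAGR.
cagr, years, value_2020 = 0.95, 5, 57e9

growth_factor = (1 + cagr) ** years        # ~28.2x over five years
value_2015 = value_2020 / growth_factor    # ~$2.0bn implied starting point

print(f"{growth_factor:.1f}x growth, implied 2015 market ~${value_2015 / 1e9:.1f}bn")
```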

But how accurate can those figures be?

The research predicts NVMe will make inroads across servers and storage hardware but also as storage networking equipment to speed NVMe across the likes of Fibre Channel and Ethernet (in so-called NVMe-over-fabrics).

The G2M report reckons more than 50% of enterprise servers will have NVMe bays by 2020, while that will be the case for 60% of enterprise storage appliances.

Meanwhile, it predicts “nearly 40%” of all-flash arrays will be NVMe-based by 2020.

There’s an interesting distinction here that reflects the current difficulties of realising the potential of NVMe flash. Namely, “NVMe bays” on the one hand and “NVMe-based arrays” on the other.

Because NVMe so radically re-works storage transport protocols it is currently held back by storage controller architectures, and so cannot realise its potential of tens or hundreds of times better performance than current SAS and SATA-based flash drives.

NVMe can slot in and realise its full potential as direct-attached storage – lending it to use in server and storage “NVMe bays” – and some vendors have delivered what are effectively direct-attached arrays that lack features such as provisioning, RAID configuration, replication etc.

But the storage controller that must handle protocols, provisioning and more advanced storage functionality currently forms a bottleneck to NVMe use in storage arrays and so we are yet to see a true NVMe flash array hit the market.

So, when the G2M report predicts 40% of flash arrays being NVMe-based by 2020, it is very much a best guess. It is a guess that hedges its bets: perhaps, but not certainly, some arrays will have cracked the NVMe-controller bottleneck; there will also be a number of products that run NVMe in less-than-optimum architectures; and there will be some “arrays” that are effectively banks of direct-attached storage.

Nevertheless, the research is interesting and shows the perceptions that exist around the potential for NVMe flash.


June 1, 2017  1:11 PM

The brain as a model for data storage

Antony Adshead

The future of data storage will not be in the binary switching of electrical cells as in flash storage.

It may also not be in magnetism-based potential successors to flash such as Racetrack Memory, where one or a few bits per cell are replaced by 100 bits per physical unit of memory.

Instead, scientists are working on ways to mimic the way the human brain stores memories.

So far, things are still at the stage of trying to work out exactly how the brain does what it does, but the potential gains to be had of mimicking it are huge.

Current estimates are that the brain has a storage capacity of possibly several petabytes. Also, according to Professor Stuart Parkin, an experimental physicist, winner of the Millennium Prize in 2014 and IBM fellow, it is estimated the brain uses one million times less energy than silicon-based memory.

Science is still to come up with anything like a consensus on how memories are stored in the brain. It is thought – to simplify hugely – that the release and uptake of neurotransmitting chemicals (of different types) between brain cells (of different types, such as neurons) are the vehicle for memories.

And that – a network of connections across which storage is shared, where the connections themselves define the thing being stored – is the model for research by Professor Parkin, who is also director of research centres at Stanford University in the US and the Max Planck Institute at Halle in Germany.

He said: “What we’re looking to do is go beyond charge-based computing. We could be inspired by how biology computes, using neurons and synapses, with data stored in a distributed fashion and currents of ions manipulating information.”

“We believe the brain stores data by distributing it among synaptic connections,” he added. “We want to build a system of connections and learn how to store information on it.”

“That’s in contrast to how we do things now with individual devices. Instead it would be a network of connections and distributed storage of information among them, but built in a totally different way with, say, 1 bit of information stored between 20,000 different connections.”

Nominations are now open for The 2018 Millennium Technology Prize (also known as the Technology Nobels).


May 8, 2017  10:52 AM

Violin Memory: How will new CEO change its fortunes?

Antony Adshead

Violin Memory – an all-flash storage pioneer whose trials and tribulations we have followed here – finally hit just about rock bottom for a tech company in recent months.

That is, Violin was declared bankrupt and in January was auctioned off over a period of three days, with the winner – at $14.5 million – Quantum Partners, an investment vehicle of the Soros Group.

Violin had been a pioneer of flash storage and had raised $162 million at its IPO in 2013. But by last year it had lost in excess of $600 million and was heading for an unseemly demise.

In early 2017 it was set for the aforementioned fire sale, of which Quantum Partners was the winner.

One must presume that the investors believe they can turn the company around.

They have appointed a new CEO, namely Ebrahim Abbassi. He comes with credentials trumpeted by Violin of having rescued three other tech outfits from the doldrums, namely Redback (turned around and acquired by Ericsson in 2006), Force 10 (acquired by Dell in 2011) and Roamware (now Mobileum).

Mr Abbassi seems to have a record of taking companies and preparing them for acquisition, so maybe that’s what Quantum Partners hopes for with Violin.

He has been at Violin for more than a year now, as COO from March 2016 before landing the top job at the end of April 2017.

Was he not able to start the turnaround sooner? Perhaps what was needed was to rid the company of debts via bankruptcy and a sweep-out of the board. The new board has since been announced, with a couple of software experts present but no hardware-experienced execs.

That might indicate the direction the company will take: restructure with a bias towards software innovation and aim for a profitable acquisition.

It’s arguable that Violin’s focus on its proprietary hardware flash modules was always a potential weakness. Perhaps it is even more so now that software-defined, hyper-converged and NVMe-based approaches are de rigueur.

It’ll be interesting to see what can be done with Violin. A potential competitor/acquisition target or destined ultimately for the Where Are They Now file?


