Storage Soup


May 30, 2014  12:50 PM

Nimble CEO sees no need for all-flash array

Dave Raffo
Nimble Storage, Storage

Surrounded by large and small storage competitors diving into the all-flash array market, Nimble Storage CEO Suresh Vasudevan said his company is at least three years away from taking such a step.

Vasudevan said Nimble’s CASL file system will support an all-flash array, but market demand is not yet there because of pricing dynamics.

He said Nimble meets the requirements for all flash, with features such as scale-out, data reduction and file locking. But he said Nimble’s hybrid systems have enough performance to compete with all-flash arrays without the cost.

“I will say the architecture itself is broad enough to enable us to go towards the flash-only array,” Vasudevan said Thursday on Nimble’s earnings call.

“The more difficult question to answer is when or whether we think it will become necessary. That question entirely revolves around the endurance of flash coupled with the price of flash. Will the price of flash go down without compromising endurance to a point where the economics start to favor an all-flash array? At this juncture, not even the semiconductor industry will give you a clear answer that says endurance will stay roughly where we need it to be for enterprise flash arrays and price will go down. So that’s the big unknown. What I am sure of is it is not happening in the next three to five years.”

Nimble’s CS200 and CS400 iSCSI arrays all combine flash and spinning disk. The vendor will also launch a higher-end system in June that includes what Nimble calls Adaptive Flash.

Nimble is competing well as is, with revenue of $46.5 million last quarter more than doubling over the previous year as its larger competitors saw their revenue shrink. The vendor exceeded its revenue goal and added 450 new customers in the quarter. The forecast for this quarter is for $49 million to $51 million in revenue.

Nimble also continues to lose money while in growth mode (76 employees added last quarter to bring the total to 668). It lost $10 million last quarter and expects to lose between $11 million and $12 million this quarter. Vasudevan said he doesn’t expect to break even for nearly two years.

Despite the losses, Vasudevan said Nimble is moving into larger companies. He said the vendor had 400 deals of more than $100,000 over the last year, twice as many as the previous year. The addition of Fibre Channel support planned for later this year should also help Nimble move into the enterprise.

Established storage companies are apparently taking Nimble seriously now. Vasudevan said price competition is “more intense” as large vendors fight harder for deals. “I would say that’s the one change versus the large incumbents,” he said when asked if large vendors are getting more aggressive on pricing.

May 29, 2014  11:40 AM

Unitrends buys startup Yuruware for cloud backup

Dave Raffo
Cloud Backup, Unitrends

Unitrends today added cloud backup technology to its data protection product line, which already includes physical backup (Unitrends integrated appliances), virtual backup (PHD Virtual) and disaster recovery failover, failback and testing (ReliableDR).

The cloud backup comes from Yuruware, an early-stage Australian startup that Unitrends acquired today.

Yuruware technology is months away from shipping, but will be built into the ReliableDR software that PHD Virtual acquired from VirtualSharp last year before Unitrends acquired PHD Virtual. ReliableDR allows customers to replicate between sites or into a Unitrends cloud. ReliableDR technology has been integrated into PHD Virtual and Unitrends backup products, and Yuruware technology will extend its capabilities to public clouds such as Amazon Web Services.

“With our backup products today, we have the ability to replicate between two end points,” said Mark Campbell, Unitrends chief strategy and technology officer. “We start running into problems with public clouds when we have to transform a VMware machine into an Amazon machine. We also have problems around the network because VMware has a virtual network and Amazon has its own network [programming] language known as CloudFormation. Yuruware can spin up machines and networks from a local infrastructure into the cloud.”

Campbell said Yuruware’s technology is similar to Amazon’s “pilot light” pattern, which replicates data that resides outside of a critical database but must be failed over and back to recover from a disaster.

“Yuruware has technology that does that but doesn’t use compute resources [such as Amazon EC2],” he said. “It only uses compute cycles when you want to assemble those objects into an Amazon Machine Instance.”
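
To make the pilot-light idea concrete, here is a minimal, hypothetical sketch in Python with the AWS SDK (boto3): replicated disk images sit as EBS snapshots, which cost only storage, and compute is consumed only at failover time, when a snapshot is registered as an Amazon Machine Image and booted. The snapshot ID, device names and instance type are illustrative, and this is not Yuruware’s actual implementation.

```python
# Hypothetical "pilot light" failover sketch using boto3. Replicated
# disk images already exist in AWS as EBS snapshots; compute is only
# consumed at failover time, when a snapshot is assembled into an
# Amazon Machine Image (AMI) and booted.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def fail_over(snapshot_id: str, name: str) -> str:
    """Register an AMI from a replicated snapshot and boot it."""
    image = ec2.register_image(
        Name=name,
        Architecture="x86_64",
        RootDeviceName="/dev/sda1",
        VirtualizationType="hvm",
        BlockDeviceMappings=[{
            "DeviceName": "/dev/sda1",
            "Ebs": {"SnapshotId": snapshot_id},
        }],
    )
    instance = ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="m3.medium",
        MinCount=1, MaxCount=1,
    )
    return instance["Instances"][0]["InstanceId"]

# Example (hypothetical IDs): fail_over("snap-0123456789abcdef0", "dr-web-server")
```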

Unitrends acquired Yuruware out of the Australian incubator NICTA. The startup consists of seven developers and patented intellectual property, which Unitrends plans to expand into full-blown public cloud backup. Campbell said Unitrends will keep the Yuruware team in Australia and add developers.

The roadmap includes expanding Yuruware’s current support for AWS and OpenStack to other public clouds, including Microsoft Azure.

Campbell said Yuruware IP is expected to show up in Unitrends products by the end of this year, with more to come in early 2015.


May 27, 2014  3:21 PM

dinCloud offers no transfer fees with its new cloud service

Sonia Lelii
Cloud storage, dinCloud, Storage

Cloud service provider dinCloud is taking on Amazon, hoping to lure customers away with dinStorage S3, its full-service cloud storage that is based on the Amazon S3 open APIs and can be used with S3-compatible applications.
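
For context on what being “based on the Amazon S3 open APIs” means in practice: an S3-compatible service can typically be reached with a standard S3 client simply by overriding the endpoint. A minimal sketch with boto3 follows; the endpoint URL, credentials and bucket name are hypothetical, not published dinCloud values.

```python
# Minimal sketch of pointing a standard S3 client at an S3-compatible
# service by overriding the endpoint. Endpoint and credentials are
# hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.dincloud.example",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/dump.sql.gz", Body=b"...")
obj = s3.get_object(Bucket="backups", Key="db/dump.sql.gz")
print(obj["Body"].read())
```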

The company recently launched its dinStorage S3 service that charges no transfer fees, which is one of the main “gotchas” of a cloud service. dinCloud also is offering 10 Gigabit Ethernet (GigE) to 40 GigE speeds, dedicated networking with AES256 encryption and multiprotocol label switching (MPLS) connectivity with 15 carriers that include AT&T and Sprint.
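
As a point of reference, AES256 refers to the 256-bit Advanced Encryption Standard. The sketch below shows the general technique using the Python cryptography library (AES-256-GCM); it illustrates the cipher itself, not dinCloud’s actual implementation.

```python
# Illustrative only: AES-256 encryption of a data block using the
# Python "cryptography" library (AES-256-GCM). Not dinCloud's
# implementation, just the general technique.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message
ciphertext = aesgcm.encrypt(nonce, b"backup block", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"backup block"
```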

dinCloud offers unlimited storage capacity and charges on a per gigabyte, per month basis.

“These guys are going for high-end cloud use cases and going after Amazon’s biggest customers,” said James Bagley, senior analyst at Storage Strategies Now. “It’s a cost play. They are providing lower cost for volume users. These are companies that are well experienced in cloud computing. What struck me is they are really going for it. They have the network and infrastructure wherewithal to pull it off.”

The Los Angeles-based dinCloud began offering its dinBackup data protection cloud service in November 2012. That service operated on NetApp storage and was available for NetApp customers looking for backup and disaster recovery. dinCloud has since discontinued the service, although it still supports some NetApp replication features for customers that want them. dinCloud has switched to white-box storage for its primary storage.

“We had provided cloud services to existing NetApp customers (but) they are no longer our primary storage provider so we discontinued the service,” said Mike Chase, dinCloud’s chief executive officer. “In large deals, we will consider it. It’s too costly and it lacks a lot of the cloud firmware that we needed. It’s also hard to encrypt. Encryption is possible on NetApp but it’s very tricky. You have to buy an encryption unit.”

Aside from no transfer fees, dinCloud offers 40 GigE compared to Amazon’s 10 GigE maximum. It charges $1,200 a month for a dedicated host compared to Amazon’s $1,440 a month. dinCloud encrypts all data at rest, while Amazon offers no encryption on Elastic Block Storage (EBS) volumes, a practice dinCloud calls “just plain stupid” in its marketing presentation.

dinCloud also offers an IP Reputation (IPR) service that keeps a database tracking every IP address on the Internet and its reputation based on past activity. This means any IP known to be a source of criminal activity is blocked. dinCloud states Amazon is “morally deficient” for not offering such a service.
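
Conceptually, an IP reputation filter is a lookup against that database before a connection is admitted. The toy sketch below illustrates the idea with a static blocklist; a real service would maintain reputation scores and refresh them continuously from threat feeds.

```python
# Toy sketch of an IP-reputation filter: look up each source address
# in a reputation database and drop connections from known-bad hosts.
# Entries are hypothetical (documentation IP ranges).
BAD_REPUTATION = {"203.0.113.7", "198.51.100.23"}

def allow_connection(source_ip: str) -> bool:
    """Block any IP known to be a source of malicious activity."""
    return source_ip not in BAD_REPUTATION

print(allow_connection("203.0.113.7"))  # False -> blocked
print(allow_connection("192.0.2.10"))   # True  -> allowed
```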

“Cloud providers are in a better position to offer it than most customers due to better pricing, engineering talent and global reach,” Chase said.

Bagley said dinCloud’s partnership with colocation provider Equinix makes its service a strong alternative to Amazon. Equinix has 100 data centers worldwide and “they tend to place them where you have high intersections of telecommunications,” he said.

“These guys are claiming to be one third the cost of Amazon S3,” Bagley said. “They can have a client have its own equipment, cage and firmware and still have all the advantages of bandwidth and throughput and automatic resiliency in the network. That is a powerful argument to say we can shut off AT&T and you won’t even know it. They can switch around without breaking stride.”


May 22, 2014  3:15 PM

X-IO switches CEOs, focuses on virtualization

Dave Raffo
ISE, Storage, VDI, Virtualization

Brian Owen today moved into a well-worn – and well-warmed – seat as CEO of X-IO Technologies. Owen becomes the vendor’s ninth CEO in less than 12 years. His job will be to help X-IO find a secure place in the storage world with its ISE storage bricks.

Owen has been CEO of MapInfo and decalog NV, and before that held executive roles at Computer Associates, Oracle and DEC.

He joined X-IO in January as vice chairman. Now he’s trading places with John Beletic, who goes from X-IO CEO to chairman. Beletic had held the CEO post since 2011.

Owen said he has been meeting with customers since January, and is satisfied the vendor’s technology is sound. He said X-IO needs to focus more on potential customers who are heavily virtualized, including on the desktop.

“The product is real and appropriate, but for specific parts of the market,” he said. “We’re not storage for all. We’re not general purpose storage. We’re high performance storage for a certain class of applications. The key things for me are to make sure we refine our focus and go into markets where we’re relevant and highly competitive.

“As you move towards highly virtualized environments, we become far more relevant. If you think about a SAN world, there’s a controller and all kinds of features built into that controller. In a post-SAN, highly virtualized world, you build features into the hypervisor that used to be in the SAN. We are not heavy in features. We count on the hypervisor to deliver those features, and we just deliver pure raw high performance and reliable storage.”

Owen said for VDI, “we’re just a screamer. We have a highly capable product there. Our sweet spot is reliable high performance at a tremendous price point. We can deliver all-flash-class performance with hybrid.”

One of Owen’s previous companies, MapInfo, went public and decalog was acquired by SunGard. Owen would not talk about X-IO finances but said, “We’re in growth mode. We had good growth in the first quarter. Our second quarter last year was a phenomenal quarter so we won’t have as much growth in this quarter but we’re still in growth mode.”

He added that X-IO needs fine-tuning more than a massive overhaul. He called ISE and the hybrid flash ISE “incredibly modern technologies” but said the vendor is “introverted and a little techy. We need to change that and be more solutions-oriented in the way we go to market.”

Owen has already promoted David Gustavsson to COO from VP of engineering and Gavin McLaughlin to VP of worldwide marketing from solutions development director. Gustavsson will continue to head engineering, and take over technical marketing, manufacturing and support.


May 22, 2014  11:34 AM

NetApp sales down; OEM sales way down

Dave Raffo
flash storage, NetApp, Storage

NetApp is having the same problems as the other large storage vendors these days – more data going into the cloud, elongated sales cycles, declines in federal spending and new innovative vendors taking customers from the big guys.

NetApp’s earnings and guidance released Wednesday reflect these struggles. But it also has one problem that its large competitors don’t have: IBM. The biggest problem for NetApp these days is that its OEM revenue is in free fall. IBM is its biggest OEM partner, selling the E-Series storage NetApp acquired from LSI as well as NetApp’s FAS storage under the IBM N-Series brand. IBM has been pushing its own storage over its partners’, and hasn’t had much success selling either.

IBM’s strategy and struggles are taking their toll on NetApp. NetApp’s OEM business last quarter fell 34 percent from the previous year, and is forecast to drop 40 percent this quarter. That caused NetApp’s overall revenue of $1.65 billion to fall four percent from last year. NetApp’s guidance for this quarter of between $1.42 billion and $1.52 billion fell far below what financial analysts expected.

“IBM reports its storage business down significantly in the last few quarters, over 20 percent [last quarter],” NetApp CEO Tom Georgens said. “IBM also has a portfolio of products they can sell that are alternatives to NetApp. We have both those dynamics at play — their ability to sell through has been challenged and our positioning within their portfolio has been challenged.”

Analysts on the call wondered why NetApp continues to sell through OEMs, as OEM sales make up seven percent of its overall revenue. Georgens said the vendor is investing less in its OEM relationships, while looking for ways to sell the E-Series more through its own channels. He pointed to the EF all-flash array as a successful E-Series product sold under the NetApp brand.

Some analysts claim other vendors are hurting NetApp, particularly smaller competitors considered more innovative. In a note to clients today, Wunderlich Securities analyst Kaushik Roy pointed to flash startup Pure Storage, hybrid storage vendor Nimble Storage, hyperconverged vendor Nutanix, and software startup Actifio among those with disruptive technologies hurting larger vendors.

“While Pure Storage and EMC’s XtremIO all-flash products are gaining traction, NetApp still does not have an all-flash array that has been designed from the ground up,” Roy wrote. “It is well known that all-flash arrays are elongating sales cycles and customers are delaying their purchases of traditional hybrid storage systems. But what may not be well known is that new data structures, new analytics engines, and new compute engines are also stealing market share from traditional storage systems vendors. …

“In our opinion, NetApp needs to acquire technologies from outside to evolve quickly and remain one of the leading technology companies providing IT infrastructure.”

On the earnings call Wednesday, Georgens defended NetApp’s flash portfolio even if its FlashRay home-built all-flash array will not ship until the second half of the year. He said NetApp shipped 18 PB of flash storage last quarter, including EF systems for database acceleration, all-flash FAS arrays and flash caching products.

“I’ll state it flat out. I would not trade the flash portfolio of NetApp with the flash portfolio of any other company,” Georgens said.

However, he did not rule out acquisitions.

“We are open to opportunities that are going to drive the growth of the company,” Georgens said. “In a transitioning market where there are a lot of new technologies and a lot of new alternatives for customers, there are a lot of properties out there to look at. For the right transactions, we’d be very much inclined to [buy a company].”


May 20, 2014  12:06 PM

Veeam adds new suite, support for EMC’s DD Boost

Dave Raffo
Data backup, Storage

When I spoke with Veeam Software CEO Ratmir Timashev a few weeks ago, he said the virtual machine data protection specialist is working on beefing up its data availability capabilities. Today, Veeam revealed more details about the next version of its Backup & Replication software, due in the third quarter of this year.

First, Backup & Replication 8 will be part of a new package called Veeam Availability Suite. The Availability Suite combines Backup & Replication with the Veeam ONE reporting, monitoring and capacity planning application. Veeam will still sell Backup & Replication as a standalone app, but Timashev said the vendor will focus on the suite to stress Veeam’s availability features.

“Instead of talking about backup and recovery, now we’re talking availability,” he said.

Veeam already disclosed one key feature of Backup & Replication 8 – the ability to back up from storage snapshots on NetApp arrays. Veeam Explorer for Storage Snapshots will also allow recovery of virtual machines, guest files and applications from NetApp SnapMirror and SnapVault. Explorer for Storage Snapshots already supports Hewlett-Packard StoreServ and StoreVirtual arrays.

NetApp is one of two major storage vendors Veeam is adding support for in version 8. It also includes EMC Data Domain Boost (DD Boost) integration. That allows Veeam customers using Data Domain backup targets to take advantage of EMC’s dedupe acceleration plug-in.

DD Boost is the first dedupe target acceleration software that Veeam supports, but Timashev said the vendor is working with Hewlett-Packard to support its Catalyst client.

Storage vendor support is part of Veeam’s strategy to move beyond its original SMB customer base.

“Most of our customers in the midmarket use Data Domain as a disk target,” Timashev said. “Working with NetApp and EMC positions us stronger in the midmarket and enterprise.”

Other new features in Backup & Replication 8 include built-in WAN acceleration with replication to go with its WAN acceleration for backups, Veeam Explorer for Microsoft SQL and Active Directory, 256-bit AES encryption for tape and over the WAN, and enhanced monitoring and reporting for cloud service providers.

Customers will be able to use the WAN acceleration for replication of backup jobs. Explorer for SQL is similar to Veeam’s current Explorer for Exchange application. Customers will be able to restore individual databases from backups or primary storage.

Timashev said snapshot support and Veeam Explorer allow the vendor to meet its goals of providing 15-minute recovery point objectives (RPOs) and recovery time objectives (RTOs).

“You can take NetApp snapshots every five, 10 or 15 minutes without affecting your production environment,” Timashev said. “We back up these snapshots, and that contributes to our mission of RPO in less than 15 minutes. Veeam Explorer is about fast recovery.”

Availability Suite will be available in the third quarter. Pricing has not been set yet, but it is expected to be slightly higher than Backup & Replication pricing. Backup & Replication’s current per CPU socket prices range from $410 for the Standard Edition to $1,150 for the Enterprise Plus edition.

Gartner storage research director Pushan Rinnen said she agrees with Veeam that greater storage vendor support will help it move into the enterprise. She said Data Domain integration will also strengthen Veeam’s dedupe performance.

“A lot of enterprises have adopted Data Domain as a disk target,” she said. “Data Domain probably has a much better dedupe ratio than Veeam. In some cases, it doesn’t make sense to turn on dedupe on the source side when you can just have the target-side dedupe.”

Rinnen said the replication improvement “allows Veeam to do more failover and failback, helping with DR.”


May 19, 2014  11:23 AM

Is the term elastic storage a new category, or a stretch?

Randy Kerns
Cloud storage, Elastic storage, Storage

EMC and IBM recently launched storage products with the term “elastic” in their names. These announcements were significant for the companies and for the IT community in understanding a direction being taken for storage technology.

EMC launched Elastic Cloud Storage that incorporates ViPR 2.0 software onto three models of hardware platforms. The hardware consists of commodity x86 servers, Ethernet networking, and JBODs with high capacity disk drives. ViPR 2.0 brings support for block, object, and Hadoop Distributed File System (HDFS) protocol storage.

IBM’s Elastic Storage is an amalgam of IBM software solutions led by General Parallel File System (GPFS) and all the advanced technology features it provides. The announcement included server-side caching and a future delivery of the open source Swift software on SoftLayer. In addition, IBM Research developed storlets that allow software to run at the edge (on storage nodes in Swift) to accelerate data selection and reduce the amount of data transferred.

Elastic is not a new description or label for storage. Amazon Elastic Block Store (EBS) has been the primary storage used by applications that run in Amazon’s EC2. Elastic is a new label from more traditional storage vendors, however. These solutions are being associated with cloud storage and extreme scaling – termed hyperscale by EMC and high-scale, high-performance by IBM (note that IBM already uses the term Hyper-Scale with the Hyper-Scale Manager for XIV, which consolidates up to 144 XIV systems). Deployment for private/hybrid clouds is mentioned repeatedly, in addition to cloud environments deployed by service providers, as targets for elastic storage.

But in the world of IT, we like to fit products and solutions into categories. Doing so helps us understand and compare solutions. Categorization is also a big factor in having discussions where both parties can easily understand what is being discussed.

These elastic storage discussions are a bit more complex and require more description of how the systems are used than a simple product discussion. The initial thought about EMC Elastic Cloud Storage is that it is ViPR delivered in a box. That is true, but it is more than that. The box concept doesn’t really foster an immediate understanding of what the system will be used for in IT environments. For IBM, Elastic Storage could be seen as GPFS on a server—a solution that has already been offered as SONAS, Storwize V7000 Unified, and the IBM System x GPFS Storage Server. But again, there is more to IBM Elastic Storage than that.

So, we have a new name that may become a category. It is still too early to tell whether it will gain real traction with customers or remain a marketing term. Ultimately, it’s about IT solving problems and applying solutions. Storing and retrieving information is the most critical part of any information processing endeavor and involves long-term economic considerations. The term elastic is a new designation for storage systems, and is currently equated with using commodity servers and JBODs with custom software. Attributes such as performance, scaling, advanced features, and reliability go along with these systems and are highlighted as differentiating elements by vendors. Elastic may be a new category, but the name alone is not yet sufficient to convey how these systems solve the problems of storing and retrieving information.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 16, 2014  10:22 AM

Sphere 3D buys Overland Storage for $81 million

Dave Raffo
Storage

Overland Storage is going away. At least, the company name will disappear after its merger with Sphere 3D is completed. Overland’s products will live on, whether or not they have the Overland brand.

Overland and Sphere 3D revealed their merger plans Thursday. You need a scorecard to keep track of the two companies’ recent mergers. Sphere 3D acquired virtual desktop infrastructure (VDI) software startup V3 Systems in March and now it is merging with Overland, which merged with tape and removable disk cartridge vendor Tandberg Data in January. Overland CEO Eric Kelly, who is also chairman of Sphere 3D, said the Tandberg merger is proceeding as planned. The companies have completed the first of three phases, with phase three scheduled to wrap up by the end of the year.

Sphere 3D will pay $81 million for Overland stock, and the combined companies will be called Sphere 3D. Kelly and Sphere 3D CEO Peter Tassiopoulos discussed the deal on a conference call with analysts Thursday but did not address what the management structure would look like. However, it would make sense for Kelly to remain chairman and Tassiopoulos to stay on as CEO. The execs did not give a projected date for closing the deal, which requires shareholder approval.

Kelly became Sphere 3D chairman last September when the two vendors formed a partnership around developing a storage platform for application virtualization.

Sphere 3D’s Glassware platform allows companies to put applications from PCs, servers and mobile devices in the cloud. The companies have an integrated product running Glassware technology on Overland SnapServer DX2 NAS appliances.

Kelly said the first phase of the Tandberg acquisition – including integration of supply chains and internal operations – was completed in March and the second phase is due to finish by the end of June. Overland CFO Kurt Kalbfleisch said he expects the Tandberg merger to reduce the companies’ operating expenses by at least $45 million by the end of 2014.

Overland’s long history of losing money continued last quarter when it lost $6.6 million, despite a sharp increase in revenue following the Tandberg deal. Revenue of $22.3 million was double the revenue from the same quarter last year and up from $10.6 million in the last quarter of 2013.

Kelly said the Sphere 3D merger means “as a combined company, we now have greater financial and operational scale, and a clear path for growth and profitability.” He said the business strategy will include selling software, cloud services and appliances. He did not discuss plans for any specific products in Overland’s tape and disk backup, SAN or NAS families.

Of the combined Glassware-SnapServer DX2 product, Kelly added, “as you start looking at what’s happening in the industry in terms of virtualization, in terms of cloud, and how that integrates with the back-end storage, you see that by putting the two technologies together, we have been able to deliver a product line that we believe is the first to the market.”

Kelly said Sphere 3D’s technology will also work with Tandberg’s products, which include tape libraries and drives, RDX removable disk, disk backup and low-end NAS.


May 14, 2014  4:21 PM

Atlantis partners with VMware for VDI, VSAN

Dave Raffo

VMware Virtual SAN (VSAN) can be a disruptive force among the rapidly growing roster of software-defined storage startups. But rather than fight VMware, Atlantis Computing wants to play a complementary role to VSAN.

Atlantis today said its Ilio software platform supports VSAN and VMware Horizon 6 VDI software, and that channel partners will bundle Ilio with the VMware software. That’s no surprise. During the VMware Partner Exchange in March, Atlantis said it would partner with VMware to bundle its new USX software with VSAN. Atlantis VP of marketing Gregg Holzrichter said that meet-in-the-channel relationship will go into effect within the next six weeks.

Atlantis had focused on VDI with its Ilio software until it launched USX for virtual servers in February. With USX, the startup can now reduce the amount of storage needed for virtual desktops and virtual servers. Holzrichter said the VMware-Atlantis partnership will revolve around VDI, which VMware has identified as one of the major use cases for VSAN. The Ilio USX software can provide data management features still lacking in the first version of VSAN. These include deduplication and compression, key technologies for VDI.
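
Deduplication pays off in VDI because hundreds of desktop VMs share nearly identical OS images, so most blocks written are duplicates. The sketch below shows the general idea of inline block-level dedupe, storing each unique block once by content hash; it is illustrative only, not Atlantis’ implementation.

```python
# Minimal sketch of inline block-level deduplication: each fixed-size
# block is stored once, keyed by its SHA-256 digest, and duplicate
# writes only add a reference. Illustrative only.
import hashlib

BLOCK_SIZE = 4096
store: dict[str, bytes] = {}   # digest -> unique block
volume: list[str] = []         # logical volume as a list of digests

def write(data: bytes) -> None:
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store only if new
        volume.append(digest)

def read() -> bytes:
    return b"".join(store[d] for d in volume)

write(b"A" * 8192)   # two identical 4 KB blocks...
print(len(store))    # ...stored as one unique block
```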

“We’ve been working with VMware to show how the Atlantis Ilio platform extends the capabilities of VSAN in a positive way,” Holzrichter said. “It’s an interesting combination where we allow you to drive down the cost per desktop significantly compared to traditional storage.”

It will be interesting to see where the partnership goes. If there is strong customer interest in using Ilio with VSAN and Horizon, VMware might OEM the software or acquire Atlantis as it did with Atlantis rival Virsto in 2013.

Then again, this could be a temporary arrangement until VMware develops its own data management features, or imports them from its parent EMC or Virsto. VMware no longer sells Virsto software but is expected to add Virsto technology to VSAN.

Holzrichter, who previously worked for VMware and Virsto, said there is room for both Virsto and Ilio technology with VSAN. “If VMware does implement the data services of Virsto, that will not overlap with the Atlantis data services,” he said. “Virsto has best in class snapshots and cloning technology, where Atlantis has best in class inline dedupe, compression, I/O processing and a unique way of using server RAM.”

Atlantis this week also said it has been awarded a patent for its content-aware I/O processing.


May 14, 2014  9:32 AM

Storage lifespans: don’t confuse technology with data

Randy Kerns

Clarification is needed about what lifespan means regarding storage, because product messaging often refers to the lifespan of the storage technology and the lifespan of the data it holds in the same context.

Lifespans of storage systems refer to many things: wear-out mechanisms for devices, technology obsolescence in the face of new developments, inadequacies of dealing with changing demands for performance and capacity, and physical issues such as space and power.

The wear-out mechanisms are tied to support costs, which typically increase dramatically after the warranty period, which runs three to five years for enterprise storage systems. These issues all lead to a cycle of planned replacement of storage systems, often triggered by the depreciation schedule for the asset.

For the information or data stored on a storage system, the lifespan depends on the characteristics and policies of that data. Information subject to regulatory compliance usually has a defined lifespan, or period of time it must be retained. Other data may have business governance about retention. Most data is not so clearly defined, and its disposition is left to the owners of the data (business owners, in many discussions). Typically, data is retained for a long time – perhaps decades or even forever.

There is also confusion about how to update storage technology independently of the content stored on it. Doing so requires changing technology without disrupting access to the data, without requiring migration that entails additional administrative effort and operational expense, and without creating risk of disruption or data loss. These concerns are addressed by the many implementations of scale-out technology delivered with NAS or object storage systems.

Clustering, grids, rings, and other interconnect and data distribution technologies are key to scale-out. Nodes can be added to a configuration (cluster, grid, ring, etc.) and data is automatically and transparently redistributed. Nodes can also be retired: data is automatically evacuated and redistributed, and once a node is empty it can be removed, all with transparent operation.
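
One common way to get this behavior is consistent hashing: keys are placed on a hash ring, each node owns the arc up to its position, and adding or retiring a node moves only the keys on that node’s arc rather than the whole data set. A small illustrative sketch in Python:

```python
# Sketch of the ring idea behind scale-out redistribution: with
# consistent hashing, adding a node moves only the keys that map to
# that node, not the whole data set. Illustrative only.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        # First node clockwise from the key's position on the ring.
        i = bisect.bisect(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

before = Ring(["node-a", "node-b", "node-c"])
after = Ring(["node-a", "node-b", "node-c", "node-d"])

keys = [f"object-{i}" for i in range(1000)]
moved = sum(before.owner(k) != after.owner(k) for k in keys)
print(f"{moved} of {len(keys)} objects moved")  # only a fraction move
```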

These scale-out characteristics allow storage technology to progress: new technology replaces old. This usually happens within the constraints of a particular vendor software or hardware implementation. The important development is that data is independent of the storage technology change.

For data, the format and the application are the big issues. Data may need to be converted to another form whenever the application that accesses it changes (meaning there is no longer support for the old format). Being able to access data from an application is more important than merely storing information. The ability to understand the data is independent of the storage. Updating technology and carrying data along with storage technology improvements is possible, and is being addressed with new scale-out systems. Dealing with formats that persist over time is a separate issue that can be independent of the storage technology.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

