Storage Soup


April 2, 2014  8:04 AM

Open Source Storage hits the comeback trail

Dave Raffo

An early open-source storage player is back, seven years after going out of business mainly because it was ahead of its time.

Open Source Storage (OSS) re-launched in January with an undisclosed amount of funding from private investors and has since released two product lines.

OSS first launched in 2001 and was gone by 2007 despite landing a few big customers including Facebook.

“We were the guys who pioneered this,” said OSS CEO Eren Niazi, who was 23 when he founded the company. “We started the open source storage movement.”

He said the movement stalled because the storage ecosystem did not warm to OSS. “A lot of tier one vendors didn’t want to work with us and investors didn’t want to back us,” he said. “The business came to a halt.

“Seven years later, people say, ‘Open source storage, I get it, it’s exactly what I need.’”

Of course, OSS faces a lot more competition in 2014 than it did in 2007. Today there are open source storage options such as OpenStack, Hadoop, Ceph, and products built on ZFS. Still, adoption remains low as vendors such as Red Hat, SwiftStack, Cloudera, Nexenta, Inktank and now OSS are trying to break out.

OSS’s Open Cluster software can run on solid-state or SAS drives and supports block or file storage. Niazi said OSS has more than 30 customers with others evaluating the software. He said his products are used mostly by enterprises “with large data demands and large deployments and are trying to reduce their costs.”

OSS products are based on its N1.618 Plug and Play Middleware and open-source software. Last month it brought out the Open Cluster ZX, which scales to 1,024 nodes. Open Cluster ZX is built for virtual servers based on OpenStack as well as NAS, object storage and virtual machine-aware storage. OSS this week added its Open Cluster Cloud Series, designed for virtual servers, cloud-based services, high-performance computing and big data analytics. The cloud series comes in two-node and four-node bundles.

April 1, 2014  2:55 PM

Which storage features add value?

Randy Kerns

Looking at advanced features is always a critical step when reviewing storage systems because of the value these features bring.  There are a large number of storage system-based features, and variations exist between different vendor implementations. But taking a step back, it is interesting to examine where these features really belong. They were developed in storage systems to fill a need and each feature could be applied at a single point, regardless of the different hosts accessing the information.

To start a discussion about where the features really belong, let’s examine the more commonly used ones. This is not a call for change because change is unlikely.  It is a discussion that may assist in some understanding of how these features can help.

Encryption. Encryption should be done at the time an application creates or modifies data and before the data is transmitted out of the application’s control. This means the data would be encrypted when it is transmitted over any network to a storage location. Access to data from another server would require the authentication and the encryption keys for decrypting the data. Encrypting data at the storage system is not the best location because applications can access the data without controls regarding encryption. Encryption at the storage system is protection from physical theft of devices.
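
As a rough sketch of that ordering (encrypt in the application, before the data leaves the application’s control), consider the toy Python example below. The cryptography package and the store_object() helper are assumptions for illustration, not a reference to any particular product’s method.

```python
# Toy sketch: encrypt in the application, then hand ciphertext to storage.
# Assumes the third-party "cryptography" package; store_object() is a
# hypothetical stand-in for whatever NAS/object/block write is actually used.
from cryptography.fernet import Fernet

def store_object(name, payload):
    # Placeholder for the real storage call (object put, NFS write, etc.)
    with open(name, "wb") as f:
        f.write(payload)

key = Fernet.generate_key()          # kept under the application's key management
cipher = Fernet(key)

record = b"customer order #1234, total $99.00"
ciphertext = cipher.encrypt(record)  # data is protected before it leaves the app
store_object("order-1234.bin", ciphertext)

# Any other server reading the object needs authentication and the key to use it.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == record
```
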

Data protection through backup. Decisions about backup should be made by the application owner or business unit. In most environments, IT makes a broad-based decision about data protection as a standard policy and applies it to data, usually on a volume basis. The actual value of the data and the corresponding protection requirements may not be known to IT, or may not stay current. Data protection should be controlled and initiated at the application level. The same holds true for restoration.
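
A minimal sketch of application-initiated protection, where the application that knows the data’s value decides when to copy it and how long to keep it, might look like the toy Python below. The file names and the 90-day retention value are made up for illustration.

```python
# Toy sketch: the application decides when to protect its own data and
# records its own retention requirement, instead of relying on a blanket
# volume-level policy. Paths and retention values are illustrative only.
import json, shutil, time
from pathlib import Path

BACKUP_DIR = Path("backups")

def protect(path, retention_days):
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    copy = BACKUP_DIR / f"{Path(path).name}.{stamp}"
    shutil.copy2(path, copy)                      # the backup copy itself
    meta = {"source": str(path), "retain_days": retention_days, "taken": stamp}
    Path(str(copy) + ".meta").write_text(json.dumps(meta))
    return copy

# The application knows this file matters and wants 90-day retention.
Path("orders.db").write_bytes(b"sample application data")
protect("orders.db", retention_days=90)
```
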

Data protection for Disaster Recovery and Business Continuance. Recovering from a disaster or continuing operations during a disaster or interruption is a complex process and requires coordination between applications and storage systems. Storage systems make replicated copies of data to other storage systems, and the failover or recovery process is orchestrated according to a set of rules. If applications could make the copies themselves (simultaneous writes to more than one location) without impacting operations (performance, etc.), the storage system would still store the remote copy, but the recovery process and coordination would be handled by the application at a single point, with knowledge of the information’s requirements.
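
To make the simultaneous-writes idea concrete, here is a toy Python sketch of an application writing every record to two targets itself; the two local directories stand in for two storage systems and are purely illustrative.

```python
# Toy sketch of application-driven replication: every write goes to two
# locations, so recovery coordination can live with the application.
# "site_a" and "site_b" directories stand in for two storage systems.
from pathlib import Path

TARGETS = [Path("site_a"), Path("site_b")]

def replicated_write(name, payload: bytes):
    for target in TARGETS:
        target.mkdir(exist_ok=True)
        tmp = target / (name + ".tmp")
        tmp.write_bytes(payload)
        tmp.rename(target / name)   # make each copy visible atomically
    # A real design would also handle an unreachable target (queueing,
    # resync), omitted here for brevity.

replicated_write("journal-0001", b"application state to survive a site loss")
```
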

Compliance management. This is a set of features that help meet different regulatory requirements. Each of those represents a challenge. Only a few storage systems do all of these:

  • Retention controls – these protect data from being deleted until a certain time, event and/or approval. They can be done in software if that software is the only method of accessing the data. Because that may defeat the flexibility of how data is stored for archiving, the storage system may be the best location for this function, with software controlling the operation (a simple enforcement sketch follows this list).
  • Immutability – protection from data alteration has been implemented in storage for some time.  With many sources of access, immutability at the storage system would still seem to be the best location.
  • Audit trail for access to data – requirements to track access to data can be done in the application if there is no way data could be accessed otherwise. Typically there may be other means, so having the storage system handle the audit trail is probably the best solution.
  • Validation of integrity – this requirement is a typical feature of storage systems anyway so continuing this in the storage system would be expected.
  • Secure deletion – this is implemented as a digital overwrite of the data on the storage device.  Because of device characteristic knowledge, this should continue to be part of the storage system.
  • Legal hold – this has the same issues as retention controls.
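
Here is the enforcement sketch referenced above: a toy Python example of software-gated retention and legal hold, where deletion is refused until retention expires and no hold is set. The catalog entries are invented for illustration and do not reflect any vendor’s implementation.

```python
# Toy sketch: software-enforced retention and legal hold. Deletion is only
# allowed once the retention date has passed and no legal hold is set.
# The in-memory catalog is illustrative; real systems persist this state.
from datetime import date

catalog = {
    "q4-ledger.csv": {"retain_until": date(2021, 12, 31), "legal_hold": False},
    "hr-case-112.pdf": {"retain_until": date(2015, 6, 30), "legal_hold": True},
}

def delete(name, today=None):
    today = today or date.today()
    rec = catalog[name]
    if rec["legal_hold"]:
        raise PermissionError(f"{name}: under legal hold, deletion refused")
    if today < rec["retain_until"]:
        raise PermissionError(f"{name}: retained until {rec['retain_until']}")
    del catalog[name]
    print(f"{name} deleted")

delete("q4-ledger.csv", today=date(2022, 1, 3))   # allowed: retention expired
# delete("hr-case-112.pdf")                       # would refuse: legal hold
```
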

It’s unlikely that we’ll see any substantial change in the way things work regarding storage features in existing IT environments. New implementations for private clouds may do things differently, which means applications would have different requirements. It will be interesting to see what develops as private clouds are deployed within traditional IT environments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 31, 2014  10:46 AM

Backupify prepares data protection for Box, Dropbox, Office 365 and others

Dave Raffo

Cloud-to-cloud backup vendor Backupify this year will move far beyond its current products protecting Google Apps and Salesforce.com.

Backupify today disclosed its 2014 roadmap will be its most ambitious by far, with plans to add support for 12 cloud applications this year including Box and Dropbox. Other cloud applications on Backupify’s 2014 roadmap include NetSuite, GitHub, Zendesk, Concur, ServiceNow, JIRA, Asana, Egnyte, Office 365, and Basecamp.

Backupify CEO Rob May said backup for Box will probably be Backupify’s next product, coming within the next month or so. The roadmap calls for support for NetSuite, Egnyte and JIRA in the first half of 2014 with the rest on the list to follow later in the year.

May said Backupify can add backup products faster now because of the new developer platform it launched last October with a set of open APIs.

“You’ll see an increased frequency of launches as the year goes on,” he said. “We’ll add maybe an application a month for three or four months, then an application every week and maybe an application a week by the end of the year.”


March 27, 2014  9:30 AM

Amplidata receives funding, seeks partners

Dave Raffo

Amplidata, an early object storage vendor, picked up $11 million in funding this week. That’s a small haul compared to recent funding rounds for storage companies, but CEO Mike Wall said it will fuel a two-pronged strategy for Amplidata.

On the technology front, Amplidata is moving towards a software-only model. Today it sells its AmpliStor software with erasure coding packaged on commodity hardware, either its own bundles or through partners.

That brings us to the second part of Amplidata’s strategy – expanding its partnerships. Wall said he is working on more deals such as the OEM relationship it has with Quantum, which uses AmpliStor on its Lattus archiving platform. Quantum was among the investors in Amplidata’s new funding round.

“I want to be software-only,” Wall said. “Early on we shipped hardware, and we still do. But Intel has put out a reference design for hardware vendors, and we’re in a position where we have a world-class erasure code stack that can run on pick-your-flavor hardware. Our business model will be, you can buy your hardware at the best price with the best performance and best quality you’re comfortable with, and we’ll provide you with the software.”

Wall said Amplidata will maintain its channel but he expects the bulk of its revenue to come through partners. He said he is in various stages of discussions with large potential partners, including a telco/ISP company that is in beta now with a cloud storage offering using AmpliStor.

Regardless of how AmpliStor is sold, Wall said he expects object storage momentum to accelerate over the coming months. “Five years ago, this was a nice-to-have technology but not a must-have,” he said. “Over the next 12 to 24 months, it will be a must-have. When you look at the amount of data generated and the size of disk drives and data sets, traditional RAID methods are not sufficient. You need erasure coding. When you do all that in software and on commercial off-the-shelf hardware, it’s too compelling to ignore.”
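
To see why coded redundancy lets a system rebuild lost data instead of keeping full extra copies, here is a deliberately tiny single-parity Python example. Production erasure codes such as AmpliStor’s spread data across many more fragments and survive multiple simultaneous losses; nothing here reflects Amplidata’s actual implementation.

```python
# Toy illustration of rebuilding a lost fragment from parity. Real erasure
# codes generalize this XOR idea so that any m of k+m fragments can be lost;
# this sketch is not Amplidata's implementation.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_blocks = [b"blockA__", b"blockB__", b"blockC__"]   # equal-sized fragments
parity = data_blocks[0]
for blk in data_blocks[1:]:
    parity = xor_blocks(parity, blk)

# Simulate losing one fragment, then rebuild it from the survivors plus parity.
lost_index = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
rebuilt = parity
for blk in survivors:
    rebuilt = xor_blocks(rebuilt, blk)

assert rebuilt == data_blocks[lost_index]
print("rebuilt:", rebuilt)
```
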

Intel Capital led Amplidata’s latest round, which brings Amplidata’s total funding to $33 million. Hummingbird Ventures, Endeavor Vision, Swisscom Ventures and Quantum all participated. All were previous investors.


March 25, 2014  4:46 PM

Google slashes cloud storage prices

Sonia Lelii

Today, Google took its turn dropping its cloud storage prices.

Google today announced it was cutting pricing by as much as 68 percent for its cloud storage services, while also eliminating tiered pricing and introducing a flat rate for its Google Cloud Storage standard and Durable Reduced Availability (DRA) storage.

The price for Google standard storage is now 2.6 cents per gigabyte per month, and DRA storage is down to 2 cents per gigabyte per month. Previously, Google’s cost structure was more complicated because customers paid a higher price on the first terabyte stored, and the price per terabyte dropped as the capacity stored in the cloud grew.
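
For a sense of what the flat rate means in practice, the quick calculation below applies the quoted per-gigabyte prices to a hypothetical 50 TB of stored data; the capacity figure is made up for illustration.

```python
# Quick back-of-the-envelope using the flat per-GB rates quoted above.
# The 50 TB figure is just an example workload.
STANDARD = 0.026   # $ per GB per month, Google Cloud Storage standard
DRA = 0.020        # $ per GB per month, Durable Reduced Availability

capacity_gb = 50 * 1000   # 50 TB, using decimal TB for simplicity

print(f"Standard: ${capacity_gb * STANDARD:,.2f} per month")  # $1,300.00
print(f"DRA:      ${capacity_gb * DRA:,.2f} per month")       # $1,000.00
```
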

“This is the most dramatic price drop we have seen and it’s the most dramatic change of the model as it goes from tiering to a flat rate,” said Nicos Vekiarides, CEO of TwinStrata, a cloud storage gateway vendor whose products move data to the Google cloud. “We were informed as a partner. Did we expect it to be this dramatic? I think a lot of folks are surprised right now.”

Competitors like Amazon and Microsoft Azure will likely respond in kind, given the pattern of these price drops in the past. Cloud providers have been engaged in a price war in what analysts characterize as a land grab or “race to the bottom.”

“What is interesting is how they will respond,” Vekiarides said. “Traditionally, it’s been an even playing field and they all kept it that way.”

Google’s price slash makes its storage services among the cheapest on the market. The only options that cost less are the cold storage services Amazon Glacier, priced at a penny per gigabyte per month, and EVault LTS2 Local-Redundancy at 1.5 cents per gigabyte per month. However, both Glacier and EVault have extra costs baked into their offerings.

“Both have greater charges for taking data out, particularly if they do it sooner than 90 days,” said Lorita Ba, TwinStrata’s director of marketing.

The low pricing is designed to lure more customers into the cloud but it’s not the only variable that companies look at when considering moving data to the cloud. They need to look at performance, how the cloud service integrates with environments, and how it helps solve the issue of maintaining capacity and growth along with how it works in a disaster recovery situation.

“There are a lot of elements to cloud storage solutions,” Vekiarides said. “But this enhances the value of replacing on-premise storage with the cloud. It makes the economic case that much easier.”


March 24, 2014  4:21 PM

Actifio pockets $100 million, plans product expansion

Dave Raffo

Copy data management vendor Actifio closed a $100 million funding round today. The round is likely the last funding it will need, and brings its total funding to $207 million and its valuation to more than $1 billion.

Andrew Gilman, Actifio’s director of global marketing, said the vendor will soon follow with a product launch that moves its software down into the mid-market. Actifio has focused on large enterprises and cloud providers, with an average selling price of $349,000.

The Actifio software creates virtual copies of data so it can be placed in any location and used for multiple purposes. The first use case to gain traction was backup but it has other use cases for companies looking to reduce the copies of data they store.

Gilman said most (51%) of Actifio’s customers use its product for data protection, with 22% using it for resiliency (business continuity/disaster recovery), another 22% for test/development and the rest for analytics. He said 22% of the customers displace EMC, 15% displace Symantec, 13% displace CommVault and the rest displace other vendors’ products.

Actifio claims the funding comes after a big 2013 in which its bookings grew 182% over 2012. The vendor claims more than 300 enterprise users worldwide, and more than 25 cloud providers including IBM’s SmartCloud Data Virtualization and SunGard’s Recover2Cloud DR services. Actifio has customers in 31 countries.

Gilman said Actifio has about 260 employees and intends to grow significantly with the funding. He said an initial public offering (IPO) is planned but Actifio is in no hurry because “we’re very methodical in everything we do.”

“We want to continue to build out our platform and take the product into new areas,” he said. “We will use this to go down into the mid-market and invest in end-user success.”

Along with new product releases, Actifio will invest in what it is calling a Customer Success Engineering group led by David Chang, current VP of products and a founder of the company along with CEO Ash Ashutosh. “It’s important that each of our users is delighted,” Gilman said. “We take this seriously as data custodians.”

New investor Tiger Global Management led the round with previous investors North Bridge, Greylock IL, Advanced Technology Ventures, Andreessen Horowitz, and Technology Crossover Ventures participating.


March 21, 2014  9:43 AM

Will Symantec hire a CEO with storage background?

Dave Raffo

Symantec’s stunning firing of CEO Steve Bennett leaves the company with a third CEO in less than two years.

The move came as a big surprise because Bennett spent about half of his 18 months as Symantec CEO plotting a turnaround plan, and implementation of that plan is far from complete. Following the departures late last year of president of products Francis deSouza and CFO James Beer, Symantec will have a vastly different leadership team after a replacement is found for Bennett. Board member Michael Brown is the interim CEO.

The statement Symantec released Thursday about the firing quoted chairman Daniel Schulman saying the decision “was the result of an ongoing deliberative process, and not precipitated by any event or impropriety.” Financial analysts point to lack of growth and loss of market share on the antivirus/security side as Symantec’s biggest problems, but there have been issues on the backup and storage side as well. The biggest problem was the Backup Exec 2012 fiasco, which still is not fixed more than two years after its initial release.

Perhaps the main overall problem with Symantec is it is two companies under one umbrella. Symantec has never made storage and data protection an equal partner with security after acquiring storage software vendor Veritas in 2005. None of Symantec’s CEOs since then – John Thompson, Enrique Salem or Bennett – were storage guys, and there have been intermittent rumors that the storage products would be spun off.  Veritas was a major storage market influencer as a standalone company, but Symantec is not seen as much of a force in the storage world.

Security and storage have mostly been separate divisions, and the two-pronged approach hasn’t worked. Symantec’s flagship backup application NetBackup is still doing well, and the decision to sell it as part of integrated appliances has worked out. But Backup Exec is a mess, Symantec’s storage management message is muddy, and it is also losing its iron grip on the antivirus market.

It would help if the new CEO has experience in storage or data protection. Interim CEO Brown has that experience as a former CEO of Quantum and a Veritas board member before the merger. However, indications are that Brown will only hold the job until a permanent CEO is found. Let’s hope the search committee keeps storage in mind when screening candidates.


March 20, 2014  4:00 PM

Tintri moves into VMware’s vSphere

Dave Raffo

Tintri, which designs its VMstore storage appliances to be virtual machine-friendly, is releasing a plug-in to let customers manage VMstore inside of VMware’s vSphere.

The plug-in lets Tintri customers manage their VMstore appliances from the vSphere vCenter management tool. It makes VMstore dashboards visible from the vCenter server, where administrators can get alerts and monitoring information. They can also set snapshot, clone and replication policies in vCenter.

“The end users care about the ESX application or the virtual desktop or the SQL Server application, not so much the storage system,” said Saradhi Sreegiriraju, Tintri senior director of product management. “We’ve exposed all the information from our VMstore dashboard into vCenter. Anything you can do from the VMstore UI – snapshots, clones, replication or monitoring – you can now do from the vCenter UI.”

The Tintri vSphere Web Client Plugin will be available next week as a download from Tintri.

Tintri’s selling point is it lets customers provision storage from the VM-level instead of having to deal with the LUNs and volumes associated with traditional storage arrays. Its greater integration with VMware comes as VMware moves more into storage with its virtual SAN (VSAN) software that turns hard drives, solid-state drives and compute running on VMware-connected servers into networked storage. VSAN is seen mainly as competitive to hyper-converged storage systems such as those from Nutanix, SimpliVity, Scale Computing and Maxta but it can also hurt VM-aware storage vendors. After all, VSAN enables companies to do many of the same things as Tintri does.

Sreegiriraju said Tintri doesn’t consider VSAN a competitor because VMstore has been on the market for three years and its hardware is tuned to work with VM-aware software. He said VSAN will compete more with traditional storage systems. “VSAN is validating the architectural underpinnings that we have,” he said. “We agree with VMware that you need a system that understands VMs at the VM level rather than  at the LUN level.”


March 20, 2014  9:49 AM

Data Dynamics automates file migration

Sonia Lelii

Data Dynamics Inc., a new company selling a decade-old product, has enhanced its StorageX file management software to simplify storage migration planning and automate the mapping of metadata characteristics between source and target file servers in data migration projects.

Data Dynamics came out of stealth last September to breathe new life into the StorageX file virtualization application originally developed by NuView Systems in 2002. Brocade acquired NuView in 2006, but killed StorageX in 2010. Data Dynamics positions StorageX as a file migration tool rather than for file virtualization.

StorageX 7.1’s new Advanced Design Mode provides an exportable grid view of a file server that allows administrators to regulate volumes on block LUNs, define the SnapMirror source and destination, adjust de-duplication ratios, set quota limits, and change volume sizing. The grid looks like a spreadsheet.

“It identifies the source infrastructure in a file server,” Data Dynamics CEO Piyush Mehta said. “As you populate the source information, it automatically asks pertinent questions on the target environment, such as identifying the filer or NAS device, what is the volume name and what is the volume size.”

It also automatically creates policies that help trigger the data movers. Previously, the process was done manually. IT administrators would have to understand the metadata from the source, and create shares and exports.

“Once they created those, they have to write scripts to move the data,” Mehta said. “It’s a fully manual process. It takes hundreds of hours and the risk of errors is high. (With our module) you save deployment time.”

StorageX supports NetApp Data Ontap, EMC VNX and EMC Isilon application programming interfaces (APIs) so data migration can be done within and across those systems. Customers can use it to migrate from a NetApp filer to an EMC filer or vice versa.

The software resides on a VMware hypervisor and uses replication agents to pull data from the source and push it to the targets. It supports both CIFS shares and NFS exports within a single console. It moves data while it’s still in use. It takes an initial copy and makes subsequent copies of the changes. It also detects NFS and CIFS access controls or security permissions and ensures file attributes are migrated correctly.
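
As a generic illustration of that initial-copy-plus-changes approach (and not StorageX’s actual mechanism or API), a bare-bones Python sketch might look like this; the mount points are hypothetical:

```python
# Generic sketch of live migration by an initial copy plus change-only passes,
# illustrating the approach described above; this is not StorageX code.
import shutil
from pathlib import Path

def sync_pass(source: Path, target: Path):
    """Copy files that are new or changed (by mtime/size) since the last pass."""
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if (not dst.exists()
                or src.stat().st_mtime > dst.stat().st_mtime
                or src.stat().st_size != dst.stat().st_size):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy2 preserves timestamps and basic attributes
            copied += 1
    return copied

src, dst = Path("/mnt/old_filer/share"), Path("/mnt/new_filer/share")  # hypothetical mounts
print("initial copy:", sync_pass(src, dst), "files")
# ... users keep working on the source ...
print("delta pass:  ", sync_pass(src, dst), "files")  # only changed files move
```
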

StorageX also has been upgraded with better reporting capabilities to export, sort and filter views and reports for selected storage resources. It can monitor agent utilization and trends, oversee device exports and shares, and audit migration policies and execution.

“The enhanced reporting provides utilization and trending of the entire migration process, the overall activity taking place across devices from the source to target side,” Mehta said. “It tells you the various states of the migration policies.”


March 18, 2014  10:35 AM

What do you do with old storage systems?

Randy Kerns

During a recent explanation I was giving on the lifespan of enterprise storage systems, I received an interesting question: what happens with systems taken out of service? There is a tendency to give a flip answer to that question, and it did bring laughter. But it is a legitimate issue, and I tried to explain what usually happens.

The most common option when replacing an enterprise disk-based system for primary storage for critical applications is to demote that system to usage for secondary storage. Secondary storage can mean less performance-critical application data storage, a backup disk target, or test/development data. The timing for replacing a system varies, but for larger enterprises the system is often used as primary storage for three years and is replaced after five years.  The cadence for replacement is usually dictated by maintenance costs, increasing failure/service rates, and technology change.

After the end of that “useful life,” what happens to the old storage systems? For systems that are purchased, a depreciation schedule is applied by the accountants, and IT does not usually expend the time and effort to challenge accounting practices. If it is a leased storage system, the system “disappears.” The “disappears” statement is one of those flip answers. Decommissioning or demoting storage is a big effort for IT. Data has to be migrated and procedures have to be changed. There is potential for big problems, because these changes introduce risk. But the storage system is taken out of service, out of the data center, and out of the building.

A leasing company may sell the system to a company that uses it for repair parts for other companies that want to hold on to their systems for a longer period of time. An organization can save money if it is willing to use out-of-date technology. However, not only will new storage systems get faster in that time, there will also be savings in space, power, and cooling with new technology that could make the old systems more expensive to run than new models.

A purchased system can be sent to a recycling company.  A recycling company will recover components that have value and make a profit from selling the extracted elements. It’s not always clear where these recycling companies are located and how they dispose of the systems.

Another way storage systems may be disposed of is by paying a company to haul them away. That company will sell the systems by the pound to a company that sees value in the metal pieces: racks, doors, slides, chassis, etc.

There aren’t many other options, although a few other ideas came up during our conversation:

  • Start more computing museums. When people have old cars they love, but not enough to keep driving them because their useful life has ended, they put them in car museums. Why not do more of this with technology systems?
  • Give them to art schools so they can create some modern art sculptures out of them.
  • Give them to universities for educational purposes.

There are probably some other clever and funny ideas. Maybe the best solution is to invest in systems with greater longevity or with architectures where technology updates can be applied independently.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

