Storage Soup


October 18, 2013  4:57 PM

Syncsort divorces itself

Dave Raffo

Syncsort is splitting up. The company revealed this week that its data protection business has spun off from its data integration business to form a separate company.

The data integration side will keep the Syncsort name. The data protection business will be known as Syncsort Data Protection for now, but will eventually take a new name. Flavio Santoni, who was Syncsort CEO, will run the data protection business, while Lonne Jaffe becomes Syncsort CEO.

Santoni said while the data integration business is bigger than the backup business, he went with the backup team because his background is in storage. Santoni was general manager of LSI’s Engenio storage unit (now owned by NetApp) before becoming Syncsort CEO.

Santoni said the split was planned for more than a year and made sense because the product lines required separate sales, marketing and management teams. He said Syncsort had been run as two separate businesses since the start of this year, and the sales, marketing and engineering teams were independent for four years.

“We’ve been on this path for a while,” he said. “We realized in 2012 that we had a clear path on both businesses. It was time to create two pure-play companies, each with its own management team and investment group.”

The data protection business will be built around the ECX file and snapshot catalog application rolled out last week. The first ECX version works with NetApp and VMware products, but Santoni said the platform will expand.

“The technology is flexible and extensible, and we can extend it to other vendors,” he said. “Catalog is the first service module. Over time, we’ll add backup as another service module and then other service modules.”

One thing that won’t change is the employees’ commute. The companies will continue to share Syncsort’s Woodcliff Lake, N.J. office building, each taking one of its two floors.

October 18, 2013  1:43 PM

Nimble follows Violin’s lead, files for IPO

Dave Raffo

Nimble Storage is trying to become the next public storage company after filing its S-1 form for an initial public offering today.

Nimble is right on time with its IPO plans. When the startup raised $40.7 million in funding in September of 2012, CEO Suresh Vasudevan said the goal was to go public between the third quarter of 2013 and the second quarter of 2014.

Nimble, which sells hybrid flash arrays that handle primary storage and data protection, reported $53.8 million in revenue for its fiscal year ending Jan. 31, 2013, and has already generated $50.6 million in the six months that ended July 31, more than double the $19.1 million in revenue from the first six months of 2012.

However, Nimble is also losing money. It lost $27.9 million last year and $19.9 million in the first six months of this year. The vendor, which began selling its CS arrays in August 2010, has accumulated $77.8 million in total losses. It has raised $98.7 million in funding, with the last round coming in September 2012.

Nimble wants to raise $150 million with its IPO, according to the filing. All-flash array vendor Violin Memory raised $162 million when it went public last month after reporting $73.8 million in revenue for the year ending Jan. 31 and $51.3 million in the six months ending July 31. Violin’s losses were $109.1 million for last year and $59.2 million over the six months ending July 31.

Violin’s IPO hasn’t been great for investors so far, though. The company sold its original shares at $9, but the price dropped more than 21% to $7.11 on the first day and stood at $7.26 at mid-afternoon today.


October 17, 2013  3:09 PM

Will object replace file?

Randy Kerns

I keep hearing this question, and I keep responding “not anytime soon,” rather than with a flat “no.” I can understand why some may consider replacing file storage with object storage given developments that are occurring, but file storage still has plenty of life.

Object storage has several substantial benefits over file storage when it comes to solving two key problems:

• Scale – The continuing growth in unstructured data means large capacity and much greater numbers of “objects,” usually files that must be stored and managed. Traditional file systems usually cannot handle the billions of objects in their hierarchical file structure effectively. Object storage uses a flat address space with access through an object ID, which may be maintained external to the storage system or as an integral element in a distributed environment.

• Durability – Information must be available, potentially from different geographies, and it needs a protection process that does not impact operational environments. Information also has a long life that usually outlasts the devices it is stored on. Object storage systems, for the most part, address these issues by geographically dispersing data so only a specific number of object elements must be available to access the information. The selectable protection also includes immutability of the objects with versioning, so an object that is incorrectly altered is still recoverable from a prior version. Organizations can manage technology transitions with object storage by introducing a new node or location for storing an object element, replacing one that is to be retired and automatically redistributing object elements. (A short sketch of the dispersal arithmetic follows this list.)
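
To make the durability point concrete, dispersal-style protection comes down to simple arithmetic: if an object is encoded into more fragments than are needed to read it back, the extras can be lost without losing data. Here is a minimal sketch of that arithmetic; the 8-of-12 layout is an illustrative assumption, not any particular vendor's geometry.

```python
# Illustrative erasure-coding arithmetic for dispersed object storage.
# The 12-fragment / 8-required layout is an assumption for the example,
# not a description of any specific product.

def dispersal_tolerance(total_fragments: int, required_fragments: int) -> int:
    """Number of simultaneous fragment (node/site) losses the object survives."""
    if required_fragments > total_fragments:
        raise ValueError("cannot require more fragments than exist")
    return total_fragments - required_fragments

def storage_overhead(total_fragments: int, required_fragments: int) -> float:
    """Raw capacity consumed per byte of user data (1.0 = no overhead)."""
    return total_fragments / required_fragments

if __name__ == "__main__":
    total, required = 12, 8   # hypothetical 8-of-12 dispersal
    print(f"Tolerates {dispersal_tolerance(total, required)} lost fragments")
    print(f"Raw-to-usable ratio: {storage_overhead(total, required):.2f}x")
    # For comparison, triple replication tolerates 2 losses at 3.0x overhead.
```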

Here are ways that object storage is being used:

• Web-based storage is typically implemented with object storage using a simple GET/PUT access method. The most commonly used is some form of the Amazon S3 protocol, now simply referred to as S3 or S3-like. (A minimal access sketch follows this list.)

• On-premise object storage systems, often called private clouds, are being promoted by vendors to give companies the web-based capabilities of self-service and massive scalability for those that have reasons for not using public cloud-based systems. These are usually object storage systems with similar access protocols such as S3.

• Some applications are being modified to write to object storage directly and new applications are being developed with cloud-based access (public or private) as the fundamental design and use of the extended metadata. The most commonly modified applications are backup and archiving software. These applications represent some of the first uses of cloud-based storage.
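
To make the GET/PUT model concrete, here is a minimal sketch using the boto3 library against an S3-compatible endpoint. The endpoint URL, bucket and key are placeholders, and credentials are assumed to be configured in the environment; this illustrates the access model rather than any particular provider's service.

```python
# Minimal S3-style PUT/GET sketch using boto3 (pip install boto3).
# Bucket, key and endpoint_url are illustrative placeholders; credentials
# are assumed to come from the environment or ~/.aws/credentials.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# PUT: store a blob under a flat key -- no directory hierarchy is involved.
s3.put_object(Bucket="example-bucket",
              Key="reports/2013/q3.pdf",
              Body=b"...object payload...")

# GET: retrieve the same object by bucket + key.
response = s3.get_object(Bucket="example-bucket", Key="reports/2013/q3.pdf")
data = response["Body"].read()
print(f"retrieved {len(data)} bytes")
```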

Gateway devices that bridge file access to object storage in clouds are available. These systems allow use of cloud storage from existing applications without modification while adding features such as metadata tagging and file caching.

So why do I say object will not replace file storage anytime soon? Because changes in applications occur slowly, if at all. There will be resistance to making a transition from files to objects. Not only is there a predominance of file-based applications, there is also widespread familiarity with using files and file structures. The current use of files has been effective for most environments and change will be embraced only when it is necessary.

Object storage is most likely to solve problems in backup, archive and large content repositories, and those problems are already being solved by current technology. But while object will not replace file usage today, it does deal with the problems of scale and durability, and opens new opportunities for storage in the future.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 14, 2013  1:49 PM

SimpliVity adds partners to bolster VDI performance

Dave Raffo

Like a bunch of other startups, hyperconverged vendor SimpliVity is going after customers who struggle with storage for a virtual desktop infrastructure (VDI).

SimpliVity today disclosed partnerships with graphics card vendor Nvidia and PC over IP (PCoIP) developer Teradici that will enable SimpliVity OmniCube customers to use outside technologies to improve VDI performance. These partnerships do not include OEM or reseller agreements, but SimpliVity will include Nvidia and Teradici products in reference architectures built for VDI storage.

Nvidia has VDI-specific graphics cards, and Teradici developed PCoIP, which offloads compression to hardware chips on the server and remote clients. VMware uses Teradici PCoIP technology in its VMware Horizon View VDI software, and Riverbed partners with Teradici to embed PCoIP in its Steelhead WAN optimization appliances to improve VDI performance.

VDI has been a common use case for systems built specifically for virtual machine (VM) storage and for storage arrays that use flash. Along with SimpliVity, converged vendors such as Nutanix, Pivot3 and Scale Computing, VM-aware storage vendors Tintri and Tegile, flash array vendors Nimble Storage, Nimbus Data and Pure Storage, and software vendors Atlantis and GreenBytes list VDI as a key driver of their business. The larger storage vendors also see VDI as a key storage use case.

“We’ve always enabled VDI,” SimpliVity vice president of marketing Tom Graves said. “VDI is a lot of VMs basically. They’re desktops, but the form factor is a lot of VMs. When you load up a traditional infrastructure with virtual desktops, there’s a lot of complexity. The more we talk to customers, the more we see VDI is a mission critical application.”

Graves said the use of host-based Nvidia and Teradici technology gives SimpliVity an advantage over traditional storage arrays in handling VDI because OmniCube stacks include virtual servers. SimpliVity already has features that should help with VDI such as inline deduplication and support for one-to-one persistent desktops.

The startup will need all these features and partnerships to stand out now that nearly every storage vendor is going after VDI.


October 14, 2013  1:49 PM

CloudByte looks to displace other ZFS storage

Dave Raffo

Despite the company name, CloudByte’s ElastiStor software isn’t limited to cloud storage. The ZFS-based software provides storage Quality of Service (QoS) separately for each application, and it can apply to data centers and virtual machine storage as well as clouds.

However, the startup is concentrating on cloud providers, with features in ElastiStor 1.2 designed to attract them.

These features include the ability to migrate data from other ZFS-based vendors through the ElastiStor console, SAS multipathing and support for high-availability failover for Fibre Channel storage.

CloudByte CEO Greg Goelz said the changes give cloud providers the flexibility to change their storage infrastructure and use multiple technologies. The ability to migrate from other ZFS systems does that, and it can also help CloudByte get a foothold with new service providers.

“If you can leave data in place and move to a new storage controller, then there is truly no vendor lock-in,” Goelz said. “We can move people to a cloud-based solution.”

CloudByte recently picked up cloud provider Netmagic as a customer. Shriranga Mulay, Netmagic’s SVP of engineering, said his company was sold on the QoS features and the ability to avoid vendor lock-in. Mulay said Netmagic will use CloudByte software to guarantee various performance levels in its storage services.

“We plan to use it for a service where we can guarantee performance,” he said. “We’ll use it in connection with existing services, but to differentiate service levels.”
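
CloudByte hasn’t described ElastiStor’s internals here, but the general idea behind guaranteeing a performance level per application is often illustrated with a token-bucket limiter: each tenant’s bucket refills at its guaranteed IOPS rate, and I/O beyond the budget is throttled. The sketch below is a generic illustration under that assumption, not ElastiStor code; the class name and tier rates are hypothetical.

```python
# Generic per-tenant IOPS throttle sketch (token bucket).
# A conceptual illustration only -- not ElastiStor's implementation.
import time

class TenantIopsLimiter:
    """One token bucket per tenant/application."""
    def __init__(self, iops_limit, burst=None):
        self.rate = float(iops_limit)                 # I/Os added per second
        self.capacity = float(burst or iops_limit)    # maximum burst size
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if one I/O may proceed now, False if it should wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical service tiers, each with its own guaranteed IOPS budget.
limiters = {"gold": TenantIopsLimiter(5000), "bronze": TenantIopsLimiter(500)}
print(limiters["gold"].allow(), limiters["bronze"].allow())
```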


October 4, 2013  9:35 AM

Verizon jumps into cloud storage market

Dave Raffo

Verizon Terremark this week launched an object-based storage cloud that will compete directly with Amazon S3, Google Cloud Storage, Microsoft Windows Azure and others.

Verizon Cloud Storage is part of Verizon’s overall cloud compute services. Tom Mays, SVP of data solutions for Verizon Terremark, said the main target customers for the cloud storage will be enterprises and government agencies.

Mays said Verizon is using commercial object storage software, optimizing it and integrating it into a hardware stack that it built itself. It will support SOAP, REST and the Amazon S3 API. He declined to say whose object software is in the stack, but he said it comes from a commercial vendor and is not OpenStack.

Verizon Cloud Storage will also support cloud gateways on the market; Mays said Verizon will soon announce which gateways it will support. The storage cloud is now available as a paid public beta.

Pricing is not yet set, but Mays said the service will be priced according to levels of durability. He said it currently can protect against seven simultaneous failures, but so far all of the data is in one of Verizon’s 50 global data centers. He said more data centers will be added, and the early roadmap includes a three-site geographic distribution setup.

Verizon is looking to expand service to current backup customers, hoping they will add more sites to store multiple copies of data.

“Until now we’ve lacked an object addressable storage platform,” Mays said, adding that the idea of object storage is transparent to most customers.

“Customers are coming to us saying they just want cheap storage in our data center,” Mays said. “They don’t care if it’s file, block or object. The advantage with object storage is the ability to do more efficient geographic distribution than traditional volume-based parity storage systems.”

Mays said he expects storage pioneer Nirvanix’s sudden demise to help Verizon’s storage cloud, although he said it may be too late to pick up Nirvanix customers. While the Nirvanix issue will prompt companies to rethink the cloud, Mays said it also places greater emphasis on going with larger, well-established providers.

“I think we’ll benefit from that; it exposes the fact that you need to pick your cloud provider wisely,” he said. “You want somebody who’s a stable name and will be around for a while. If you only have cloud storage and no other layered value-added services, it’s not as attractive as having a broad range of things that you can use that storage with.”


October 3, 2013  2:11 PM

Nirvanix closeout: Is there a delete key for exabytes of data?

Sonia Lelii

You know the old adage that the last one out should turn off the lights? Well, in the case of  Nirvanix, the last person out needs to delete the petabytes of data stored on its infrastructure.

“My concern is if there is anybody left to deal with the data deletion,” said George Crump, president of analyst firm Storage Switzerland. “I haven’t heard any one talking about this. I don’t know if there will be any employees left to execute that function. Are there enough employees left to reformat the drives? There are no details about what happens on Oct. 16.”

The seven-year-old cloud provider has filed for Chapter 11 bankruptcy and given customers an Oct. 15 deadline to get their data out of its cloud. Typically, Chapter 11 bankruptcy means a company intends to reorganize and recapitalize, but Nirvanix said it was filing to “maximize value for its creditors while continuing its efforts to provide the best possible transition to customers.”

Crump said if Nirvanix’s technology assets are sold at auction “there could be some problems. The assumption is that at some point, somebody will come in and clean things up and that includes the clear destruction of the data. But there are no details about what happens on Oct. 16.”

The high-end Nirvanix platform was designed for millions of users, billions of files and exabytes of data, which helped differentiate its offering from other cloud storage providers. Nirvanix used a geo-diverse namespace to create logical pools across all deployed nodes in public, hybrid or private cloud implementations.


October 3, 2013  10:51 AM

Object storage can solve large capacity problems

Randy Kerns

There have been reports that a social media firm has purchased 85 PB of object storage from EMC. This is significant for several reasons. Certainly it is a large amount of capacity in a single purchase that should make the salesman and vendor happy. It is also an example of how the major focus of an object storage deployment is on solving a problem involving large amounts of capacity rather than on the underlying technology.

The capacity problem, in this case a service provider environment, is being solved with the new generation of object storage where a RESTful interface such as Amazon’s S3 over HTTP is used to retrieve (get) and store (put) data in the form of objects. Object storage’s flat namespace is another feature used to support massive scaling. There are other characteristics that object storage brings which are covered in Evaluator Group research.
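
One way to see why a flat namespace scales is that an object’s placement can be computed directly from its key instead of being resolved through a directory hierarchy. The sketch below illustrates the idea with a plain hash-to-node mapping; the node names and count are made up, and production systems use consistent hashing or placement rings rather than a simple modulo.

```python
# Flat-namespace placement sketch: an object's home node is computed from its
# key, with no directory tree to traverse. The node list is illustrative only;
# real systems use consistent hashing / placement rings to handle node changes.
import hashlib

NODES = [f"storage-node-{i:02d}" for i in range(16)]  # hypothetical cluster

def node_for(object_key: str) -> str:
    digest = hashlib.sha256(object_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(NODES)
    return NODES[index]

for key in ("photos/2013/10/cat.jpg", "backups/db/full-20131003.dump"):
    print(key, "->", node_for(key))
```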

Some preliminary conclusions can be drawn from the reports:

• The massive scale in both capacity and number of objects required for some uses results in massive storage acquisitions. Vendors do not want to miss out on this opportunity. This turns into big money and the opportunity to sell more storage outside of the traditional data center. Additional management software and data protection is an obvious revenue opportunity beyond the storage acquisition.

• Service providers with specialized usages are the early object storage customers. Private clouds or in-house solutions for content repositories, data analytics storage, and archiving will follow.

• As usual, not all major deals will be disclosed. Some customers simply won’t allow vendors to reference them. One reason is competitive: many companies do not want others to know how they are solving problems.

These conclusions lead to a set of predictions for the future of object storage:

• The frequency of major purchases – meaning multi-petabyte acquisitions – will continue to increase as scaling needs become apparent.

• Vendors will disclose major successes to highlight their “leadership” in the category. The disclosures drive more business by creating the perception that these vendors must be considered for possible solutions.

• Multiple types of usages will develop over time. Currently, content repositories, archiving, and collaboration solutions are areas where object storage is being applied. Storing of analytics data is another developing use case. There will be more usages and some interesting applications will develop over time.

The storage industry is at the beginning of seeing a new generation of object storage as the solution to massive scaling problems. This will be an interesting area to watch – and to be involved in.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 2, 2013  9:26 AM

Nirvanix files for bankruptcy

Dave Raffo

Failed cloud storage provider Nirvanix has filed for Chapter 11 bankruptcy, according to a statement posted Tuesday on the company’s web site.

Nirvanix also posted a statement saying it is working to make resources available until Oct. 15 to help customers move data off the Nirvanix cloud onto similar clouds from Amazon, IBM, Google or Microsoft. Nirvanix set up a rapid response team with IBM SoftLayer to help ease the transition. IBM sold Nirvanix cloud storage services through an OEM deal.

Interestingly, IBM rival Dell stands to take a big financial hit from Nirvanix’s bankruptcy. Nirvanix owes Dell Marketing LP $407,000, making it the largest unsecured creditor. Other creditors include Salesforce.com, hosting services provider Equinix and analyst firm Gartner. The Nirvanix Chapter 11 filing reported assets of between $10 million and $50 million, and roughly the same amount of liabilities.

The Nirvanix statement said it would “pursue alternatives to maximize value for its creditors while continuing efforts to provide the best possible transition for customers.”


September 27, 2013  3:29 PM

Violin Memory completes IPO, shares drop

Dave Raffo

Violin Memory finished its first day as a public company today, $146 million richer and with a day of disappointing trading behind it.

Violin completed its initial public offering Thursday night, pricing shares on the New York Stock Exchange at $9 – the midway point of its target range – to raise $162 million. By the close of trading today, its share price was down to $7.11, a 21% drop.

Violin CEO Don Basile said one thing didn’t change for the flash array vendor today.

“Our strategy doesn’t change,” he said. “Our one true competitor is EMC, and we focus every day on beating EMC.”

What does change is the way people look at Violin now. “It’s a great milestone,” he said. “But once you’re a public company, you have a quarterly scorecard and you have to execute every quarter.”

That scorecard was mixed for Violin as a private company. According to its IPO filing, Violin’s revenues steadily rose, from $11.4 million in 2010 to $73.8 million in 2012 and $51.3 million in the first half of 2013. But losses also mounted – Violin finished 2012 $109.1 million in the red and lost $59.2 million for the first six months of this year.

Basile said Violin will now moderate the growth of its sales and marketing teams to try to shrink those losses.

“We will invest in sales and marketing at the same pace or slightly above pace at what we have done,” he said. “We quadrupled the size of the company the last two years. We don’t have to expand as rapidly.”

As for the drop in share price on the first day of trading, Basile said “we’re focused on the long-term strategy, not what happens on a given day.”

While the strategy focuses on EMC, there is really a lot more competition out there. Violin was the market leader in flash array sales last year according to Gartner, but EMC and other leading storage vendors did not have an all-flash array platform then. Now just about all of them have all-flash systems.

“I attribute a lot of Violin’s success to being early to market,” said Mark Welke, NetApp senior director of product marketing. “They started in 2009 developing a flash array. We started our flash strategy around the same time and went for flash cache initially.”

