Despite the company name, CloudByte’s ElastiStor software isn’t limited to cloud storage. The ZFS-based software provides per-application storage Quality of Service (QoS), and can be applied to data centers and virtual machine storage as well as clouds.
However, the startup is concentrating on cloud providers, with features in ElastiStor 1.2 designed to attract them.
These features include the ability to migrate data from other ZFS-based vendors through the ElastiStor console, SAS multipathing, and high-availability failover support for Fibre Channel storage.
CloudByte CEO Greg Goelz said the changes give cloud providers the flexibility to change their storage infrastructure and use multiple technologies. The ability to migrate from other ZFS systems adds that flexibility, and can help CloudByte gain a foothold with new service providers.
“If you can leave data in place and move to a new storage controller, then there is truly no vendor lock-in,” Goelz said. “We can move people to a cloud-based solution.”
CloudByte recently picked up cloud provider Netmagic as a customer. Shriranga Mulay, Netmagic’s SVP of engineering, said his company was sold on the QoS features and the ability to avoid vendor lock-in. Mulay said Netmagic will use CloudByte software to guarantee various performance levels in its storage services.
“We plan to use it for a service where we can guarantee performance,” he said. “We’ll use it in connection with existing services, but to differentiate service levels.”
Verizon Terremark this week launched an object-based storage cloud that will compete directly with Amazon S3, Google Cloud Storage, Microsoft Windows Azure and others.
Verizon Cloud Storage is part of Verizon’s overall cloud compute services. Tom Mays, SVP of data solutions for Verizon Terremark, said the main target customers for the cloud storage will be enterprises and government agencies.
Mays said Verizon is using commercial object storage software, optimizing it and integrating it into a hardware stack it built itself. It will support SOAP, REST and the Amazon S3 API. He declined to say whose object software is in the stack, but he said it comes from a commercial vendor and not OpenStack.
Verizon Cloud Storage will also support cloud gateways on the market. Mays said it will soon announce which gateways it will support. The storage cloud is now available as a paid public beta.
Pricing is not yet set, but Mays said the service will be priced according to levels of durability. He said it currently can protect against seven simultaneous failures, but so far all of the data resides in one of Verizon’s 50 global data centers. He said more data centers will be added, and the early roadmap includes a three-site geographic distribution setup.
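Verizon has not disclosed how it achieves that failure tolerance, but surviving seven simultaneous failures is characteristic of erasure coding, where an object is split into k data fragments plus m parity fragments and any k fragments suffice to rebuild it. A minimal sketch of the capacity trade-off, with illustrative parameters (the 10+7 layout below is an assumption, not Verizon's disclosed scheme):

```python
def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes stored per byte of user data for a k+m erasure-coded layout."""
    return (k + m) / k

# A hypothetical 10+7 layout tolerates seven simultaneous failures (the
# figure Mays cites) while storing only 1.7x the user data.
print(erasure_overhead(10, 7))  # 1.7

# Full replication with the same tolerance would need eight copies.
print(erasure_overhead(1, 7))   # 8.0
```

This capacity efficiency is why, as Mays notes later, object stores can do geographic distribution more cheaply than traditional volume-based parity systems.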
Verizon is looking to expand service to current backup customers, hoping they will add more sites to store multiple copies of data.
“Until now we’ve lacked an object addressable storage platform,” Mays said, adding that the idea of object storage is transparent to most customers.
“Customers are coming to us saying they just want cheap storage in our data center,” Mays said. “They don’t care if it’s file, block or object. The advantage with object storage is the ability to do more efficient geographic distribution than traditional volume-based parity storage systems.”
Mays said he expects storage pioneer Nirvanix’s sudden demise to help Verizon’s storage cloud, although he said it may be too late to pick up Nirvanix customers. While the Nirvanix issue will prompt companies to re-think the cloud, Mays said it also places greater emphasis on going with larger well-established providers.
“I think we’ll benefit from that, it exposes the fact that you need to pick your cloud provider wisely,” he said. “You want somebody who’s a stable name and will be around for a while. If you only have cloud storage and no other layered value-added services, it’s not as attractive as having a broad range of things that you can use that storage with.”
You know the old adage that the last one out should turn off the lights? Well, in the case of Nirvanix, the last person out needs to delete the petabytes of data stored on its infrastructure.
“My concern is if there is anybody left to deal with the data deletion,” said George Crump, president of analyst firm Storage Switzerland. “I haven’t heard anyone talking about this. I don’t know if there will be any employees left to execute that function. Are there enough employees left to reformat the drives? There are no details about what happens on Oct. 16.”
The seven-year-old cloud provider has filed for Chapter 11 bankruptcy and given customers an Oct. 15 deadline to get their data off its cloud. Typically, Chapter 11 bankruptcy means a company intends to reorganize and recapitalize, but Nirvanix said it was filing to “maximize value for its creditors while continuing its efforts to provide the best possible transition to customers.”
Crump said if Nirvanix’s technology assets are sold at auction “there could be some problems. The assumption is that at some point, somebody will come in and clean things up and that includes the clear destruction of the data. But there are no details about what happens on Oct. 16.”
The Nirvanix high-end platform was designed for millions of users, billions of files and exabytes of data, which helped differentiate its offering from other cloud storage providers. Nirvanix used a geo-diverse namespace to create logical pools across all deployed nodes in public, hybrid or private cloud implementations.
There have been reports that a social media firm has purchased 85 PB of object storage from EMC. This is significant for several reasons. Certainly it is a large amount of capacity in a single purchase that should make the salesman and vendor happy. It is also an example of how the major focus of an object storage deployment is on solving a problem involving large amounts of capacity rather than on the underlying technology.
The capacity problem, in this case a service provider environment, is being solved with the new generation of object storage where a RESTful interface such as Amazon’s S3 over HTTP is used to retrieve (get) and store (put) data in the form of objects. Object storage’s flat namespace is another feature used to support massive scaling. There are other characteristics that object storage brings which are covered in Evaluator Group research.
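The get/put pattern described above maps HTTP verbs directly onto storage operations against a flat bucket/key namespace. A minimal sketch of the request lines such an interface generates (bucket and key names are invented for illustration; real S3 requests also carry authentication and date headers):

```python
from urllib.parse import quote

def s3_request_line(verb: str, bucket: str, key: str) -> str:
    """HTTP request line for a put (PUT) or get (GET) of a single object."""
    return f"{verb} /{bucket}/{quote(key)} HTTP/1.1"

# Store, then retrieve, one object. The slashes in the key are just
# characters in a flat namespace -- the store enforces no real hierarchy.
print(s3_request_line("PUT", "media", "2013/10/photo.jpg"))
print(s3_request_line("GET", "media", "2013/10/photo.jpg"))
```

Because every object is addressed by a single key rather than a directory tree, the namespace can be sharded across nodes without coordination, which is what enables the massive scaling the article describes.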
Some preliminary conclusions can be drawn from the reports:
• The massive scale in both capacity and number of objects required for some uses results in massive storage acquisitions. Vendors do not want to miss out on this opportunity. This turns into big money and the opportunity to sell more storage outside of the traditional data center. Additional management software and data protection are obvious revenue opportunities beyond the storage acquisition.
• Service providers with specialized usages are the early object storage customers. Private clouds or in-house solutions for content repositories, data analytics storage, and archiving will follow.
• As usual, not all major deals will be disclosed. Vendors simply won’t be able to reference some customers, partly because many companies, for competitive reasons, do not want others to know how they are solving their problems.
These conclusions lead to a set of predictions for the future of object storage:
• The frequency of major purchases – meaning multi-petabyte acquisitions – will continue to increase as scaling needs become apparent.
• Vendors will disclose major successes to highlight their “leadership” in the category. These disclosures drive more business by creating the perception that the vendor must be considered for possible solutions.
• Multiple types of usages will develop over time. Currently, content repositories, archiving, and collaboration solutions are areas where object storage is being applied. Storing of analytics data is another developing use case. There will be more usages and some interesting applications will develop over time.
The storage industry is at the beginning of seeing a new generation of object storage as the solution to massive scaling problems. This will be an interesting area to watch – and to be involved in.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Failed cloud storage provider Nirvanix has filed for Chapter 11 bankruptcy, according to a statement posted Tuesday on the company’s web site.
Nirvanix also posted a statement saying it is working to make resources available until Oct. 15 to help customers move data off the Nirvanix cloud onto similar clouds from Amazon, IBM, Google or Microsoft. Nirvanix set up a rapid response team with IBM SoftLayer to help ease the transition. IBM sold Nirvanix cloud storage services through an OEM deal.
Interestingly, IBM rival Dell stands to take a big financial hit from Nirvanix’s bankruptcy. Nirvanix owes Dell Marketing LP $407,000, making it the largest unsecured creditor. Other creditors include Salesforce.com, hosting services provider Equinix and analyst firm Gartner. The Nirvanix Chapter 11 filing reported assets of between $10 million and $50 million, and roughly the same amount of liabilities.
The Nirvanix statement said it would “pursue alternatives to maximize value for its creditors while continuing efforts to provide the best possible transition for customers.”
Violin Memory finished its first day as a public company today, $146 million richer and with a day of disappointing trading behind it.
Violin completed its initial public offering Thursday night, pricing shares on the New York Stock Exchange at $9 – the midway point of its target range – to raise $162 million. By the close of trading today, its share price was down to $7.11, a 21% drop.
Violin CEO Don Basile said one thing didn’t change for the flash array vendor today.
“Our strategy doesn’t change,” he said. “Our one true competitor is EMC, and we focus every day on beating EMC.”
What does change is the way people look at Violin now. “It’s a great milestone,” he said. “But once you’re a public company, you have a quarterly scorecard and you have to execute every quarter.”
That scorecard was mixed for Violin as a private company. According to its IPO filing, Violin’s revenues steadily rose, from $11.4 million in 2010 to $73.8 million in 2012 and $51.3 million in the first half of 2013. But losses also mounted – Violin finished 2012 $109.1 million in the red and lost $59.2 million for the first six months of this year.
Basile said Violin will moderate the growth of its sales and marketing teams now to try to shrink those losses.
“We will invest in sales and marketing at the same pace or slightly above pace at what we have done,” he said. “We quadrupled the size of the company the last two years. We don’t have to expand as rapidly.”
As for the drop in share price on the first day of trading, Basile said “we’re focused on the long-term strategy, not what happens on a given day.”
While the strategy focuses on EMC, there is really a lot more competition out there. Violin was the market leader in flash array sales last year according to Gartner, but EMC and other leading storage vendors did not have an all-flash array platform then. Now just about all of them have all-flash systems.
“I attribute a lot of Violin’s success to being early to market,” said Mark Welke, NetApp senior director of product marketing. “They started in 2009 developing a flash array. We started our flash strategy around the same time and went for flash cache initially.”
Don’t expect Brocade to follow its Fibre Channel SAN switch rival Cisco into the world of flash storage arrays.
During Brocade’s Analyst Day Wednesday, Brocade CEO Lloyd Carney said Cisco’s acquisition of all-flash array startup Whiptail was a blunder that should help Brocade increase its lead in FC switching.
“Cisco went off and bought a storage company,” Carney said. “It’s causing people in the storage business to say, ‘Should I partner with them as much as I did in the past?’ We are leveraging that to our benefit. The enemy of my enemy is my friend. They’re making enemies and we’re making friends.”
Most FC switches are sold through OEM deals with storage vendors. Brocade is counting on improved relationships with those storage vendors following Cisco’s move into flash, although Cisco insists the Whiptail technology will be used with its UCS server platform rather than as a storage offering.
Brocade tried a similar strategy after it acquired Ethernet switching vendor Foundry in 2008. When Cisco moved into the server market with UCS, Brocade became a closer partner on the Ethernet side with server vendors IBM, Hewlett-Packard (HP) and Dell. The results weren’t what Brocade hoped for as HP and Dell went out and bought their own Ethernet switch companies.
Now Brocade’s main target to grow closer to is EMC, which has historically been a tighter Cisco ally and partners with Cisco in the VCE converged infrastructure alliance. Cisco sells more FC switches through EMC than Brocade does, and Carney is hoping to change that using the Whiptail deal as a catalyst. EMC sells its own XtremIO all-flash array that competes with Whiptail.
Brocade is still trying to cut into Cisco’s market share lead in Ethernet switching, but Carney made it clear that storage is its main focus. While the future of FC has been questioned, Brocade executives said developments such as flash and the cloud will increase the need for storage networking.
“We’re number one in the SAN market, and that’s what we’re known for,” Carney said. “You start worrying about Brocade when people stop buying storage.”
And while Brocade isn’t looking to get into the storage array business, Carney said this is a good time to pick up new technology through acquisitions. “Startups are overfunded with venture capitalist money and every good technology has about two or three companies you can buy,” he said. “It’s a buyer’s market now.”
When the Virtual Computing Environment (VCE) coalition began four years ago, the products its three founding companies brought to the table were clearly defined and unique. Cisco contributed servers and Fibre Channel and Ethernet networking, EMC the storage, and VMware the virtualization.
Since then, the lines have blurred some and Cisco finds itself in competition in some areas with EMC and its subsidiary VMware. That started last year when VMware acquired Nicira to get into software-defined networking. All the companies are vying for cloud customers, and Cisco revealed its intention to acquire flash storage array startup Whiptail two weeks ago.
The Whiptail news preceded the latest Vblock launch by a week. And that launch included a Vblock Specialized System for Extreme Applications, which features the EMC XtremIO all-flash array as well as EMC Isilon scale-out NAS. Virtual desktop infrastructure (VDI) storage is the main target for that Vblock.
So, you’re wondering, what happens when Cisco completes the Whiptail acquisition and has its own flash array? Will there be another Specialized System for Extreme Applications configuration including Whiptail?
That’s not even an issue, according to VCE VP of product development strategy Todd Pavone. He said VCE doesn’t consider Whiptail technology as a flash array option, because Cisco doesn’t see it that way.
“Cisco is going to bake Whiptail technology into UCS for server-side flash,” Pavone said. “EMC will continue to stay where it is on the storage side of the market. We will take advantage of Whiptail when it is integrated into the UCS fabric as server-side flash.”
VCE plans to ship a Vblock Specialized System for high-performance databases by the end of this year using EMC VNX or VMAX arrays with FAST auto-tiering software and UCS servers with server-side flash cache. It’s unlikely that any Whiptail technology can be included by then, but that sounds like a good place to use it eventually.
U.K.-based Aorta Cloud is trying to rescue its soon-to-be-defunct cloud storage partner Nirvanix, and has apparently enlisted the help of IBM. Aorta is also asking if Nirvanix customers want to pitch in.
Aorta – with the help of its sister company Aorta Capital – has set up a Nirvanix Rescue Package page on its website detailing efforts to save the day for Nirvanix customers. Nirvanix notified its customers earlier this week that they have until the end of September to move their data off its cloud because it is going out of business.
The note on the Aorta web site from CEO Steve Ampleford claims his group has a “seven-figure commitment” to provide liquidity and a bank has agreed to match its investment. He also asked if there were any Nirvanix customers who would like to join his funding effort. The plan is to raise enough money to keep the company going for at least two months, and then try to secure more funding to save it long-term or at least give customers more time to get their data off the Nirvanix cloud. Ampleford said he wanted to consolidate Nirvanix’s seven global data centers into two or three to maintain operations while reducing overhead.
“After all, the hurdles for existing customers to move their data are substantial and if there is any way that we can help Nirvanix continue its operations, this has to be a preferable scenario for those customers to any alternative,” Ampleford wrote in his blog. “ … The technology is robust and solid, the business is credible, the market clearly exists and demand is there. We must be able to find a way forward together.”
An update to the site posted today claims “We can now publicly confirm that IBM are an active participant in our attempt to drive forward a solution. More to follow …”
IBM is a Nirvanix OEM partner.
Hewlett-Packard today upgraded its Application Information Optimizer (AIO) archiving application, adding support for the cloud and integration with the HP Intelligent Data Operation Layer (IDOL) search engine.
AIO, which HP acquired when it bought Autonomy in 2011, is used to archive or retire data for performance or regulation purposes. It includes e-discovery and data protection features for information governance. The software accesses, classifies and moves outdated and inactive structured data from production databases and legacy applications onto lower-cost storage. From there, it can be used in other applications or deleted.
The integration with the IDOL search engine lets customers mine structured and unstructured data through AIO. With the new release, HP added support for the HP Cloud and Amazon S3.
“We are adding flexibility on where you can retrieve the information and where customers can put the data, such as the cloud,” said Joe Garber, HP’s vice president of information governance.
IDOL integration gives AIO “Google-like” search so administrators can easily locate data residing on old hardware that is difficult to retrieve because of outdated technology.
AIO now can do searches across multiple IBM DB2 databases via the IDOL search engine. Administrators can use SQL or plain English searches.
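HP has not published the query interface here, but the SQL side of that capability can be sketched generically: archived rows retired from a production database remain queryable with ordinary SQL. The sketch below uses an in-memory SQLite database purely as a stand-in for the DB2 archive sources, and the table and column names are invented:

```python
import sqlite3

# Stand-in archive store: SQLite substitutes for the DB2 sources here,
# and the schema below is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE retired_orders (order_id INTEGER, region TEXT, closed_year INTEGER)"
)
conn.executemany(
    "INSERT INTO retired_orders VALUES (?, ?, ?)",
    [(1, "EMEA", 2008), (2, "APAC", 2011), (3, "EMEA", 2012)],
)

# The kind of SQL search an administrator might run against archived,
# inactive data pulled out of a legacy application.
rows = conn.execute(
    "SELECT order_id FROM retired_orders "
    "WHERE region = 'EMEA' AND closed_year < 2010"
).fetchall()
print(rows)  # [(1,)]
```

The point of the feature is that data moved off production systems stays addressable with familiar query tools rather than requiring the original application to be kept alive.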