The U.S. federal government shutdown and cautious IT spending caused EMC to miss its revenue goals last quarter and lower expectations for the year.
The storage market leader reported revenue of $5.5 billion last quarter, up 5% from a year ago but $250 million below expectations, and its new full-year guidance of $23.25 billion is below the analyst consensus of $23.44 billion. EMC executives blamed the shortfall on low U.S. federal government spending and on customers outside of government waiting until the end of the quarter to place orders.
On the company’s earnings call today, EMC CEO Joe Tucci said he had “mixed feelings and emotions” about last quarter. “I am disappointed that we missed expectations,” he said. “On the other hand, I feel extremely good about our strategic positioning, products and service offerings. … We feel good that we continue to grow faster than most of our IT peers.”
Tucci said EMC’s federal storage business revenue declined more than 40% from last year, a huge drop because it came in the final quarter of the government’s fiscal year, when the government usually spends more on IT than in any other quarter. Tucci said that while the revenue did not go to another storage vendor and the lost deals may not be dead, government budget uncertainties will prevent EMC from making up the shortfall this quarter.
Outside of the government, Tucci said, “customer caution and scrutiny of purchases continued,” creating a backlog of orders that came in on the final day of the quarter. He said EMC received almost $300 million in orders on the final day, three times what it was expecting, and about $100 million of those orders were pushed to this quarter.
Despite that $100 million already on the books, EMC lowered its expectations for this quarter. EMC president David Goulden said he is still expecting a budget flush in the last quarter of the year but not as strong as in most years.
Like Tucci, however, Goulden said EMC is increasing its market share over competitors. “We are doing well relative to the market in all the segments we play in,” he said.
EMC executives said sales of high-end VMAX arrays took the biggest hit from the federal government spending slowdown.
Other tidbits from the call:
- 70% of VMAX systems shipped had some flash storage in them.
- The next-generation Atmos object storage system, due to ship next year, will be part of EMC’s Project Nile.
- Around half of the VNX unified storage systems shipped last quarter were the new models launched in August.
- The all-flash XtremIO array is scheduled to ship this quarter.
- EMC will add Hadoop Distributed File System (HDFS) support to ViPR next year.
Syncsort is splitting up. The company revealed this week that its data protection business has spun off into a company separate from its data integration business.
The data integration side will keep the Syncsort name. The data protection business will be known as Syncsort Data Protection for now, but will eventually pick a new name. Flavio Santoni, who was Syncsort CEO, will run the data protection business, while Lonne Jaffe becomes Syncsort CEO.
Santoni said while the data integration business is bigger than the backup business, he went with the backup team because his background is in storage. Santoni was general manager of LSI’s Engenio storage unit (now owned by NetApp) before becoming Syncsort CEO.
Santoni said the split was planned for more than a year and made sense because the product lines required separate sales, marketing and management teams. He said Syncsort had been run as two separate businesses since the start of this year, and the sales, marketing and engineering teams were independent for four years.
“We’ve been on this path for a while,” he said. “We realized in 2012 that we had a clear path on both businesses. It was time to create two pure-play companies, each with its own management team and investment group.”
The data protection business will be built around the ECX file and snapshot catalog application rolled out last week. The first ECX version works with NetApp and VMware products, but Santoni said the platform will expand.
“The technology is flexible and extensible, and we can extend it to other vendors,” he said. “Catalog is the first service module. Over time, we’ll add backup as another service module and then other service modules.”
One thing that won’t change is the employees’ commute. The companies will continue to share Syncsort’s Woodcliff Lake, N.J. office building, each taking one of its two floors.
Nimble Storage is trying to become the next public storage company after filing its S-1 form for an initial public offering today.
Nimble is right on time with its IPO plans. When the startup raised $40.7 million in funding in September of 2012, CEO Suresh Vasudevan said the goal was to go public between the third quarter of 2013 and the second quarter of 2014.
Nimble, which sells hybrid flash arrays that handle primary storage and data protection, reported $53.8 million in revenue for its fiscal year ended Jan. 31, 2013, and has already generated $50.6 million in the six months that ended July 31. That more than doubled the $19.1 million in revenue from the same six-month period a year earlier.
However, Nimble is also losing money. It lost $27.9 million last year and $19.9 million in the first six months of this year. The vendor, which began selling its CS arrays in August 2010, has accumulated $77.8 million in losses. It has raised $98.7 million in funding, with the last round coming in September 2012.
Nimble wants to raise $150 million with its IPO, according to the filing. All-flash array vendor Violin Memory raised $162 million when it went public last month after reporting $73.8 million in revenue for the year ending Jan. 31 and $51.3 million in the six months ending July 31. Violin’s losses were $109.1 million for last year and $59.2 million over the six months ending July 31.
Violin’s IPO hasn’t been great for investors so far, though. The company sold its original shares at $9 but the price dropped 21% to $7.11 on the first day and stood at $7.26 at mid-afternoon today.
I keep hearing the question of whether object storage will replace file storage, and I keep responding “not anytime soon” rather than with a flat “no.” I can understand why some may consider replacing file storage with object storage given recent developments, but file storage still has plenty of life.
Object storage has substantial benefits over file storage when it comes to solving two problems:
• Scale – The continuing growth in unstructured data means large capacities and much greater numbers of “objects,” usually files, that must be stored and managed. Traditional file systems usually cannot handle billions of objects effectively in their hierarchical structures. Object storage uses a flat address space with access through an object ID, which may be maintained external to the storage system or as an integral element in a distributed environment.
• Durability – Information must be available, potentially from different geographies, and needs a protection process that does not impact operational environments. Information also has a long life that usually outlasts the devices it is stored on. Object storage systems, for the most part, address these issues by geographically dispersing data so that only a specific number of object elements must be available to access the information. The selectable protection also includes immutability of objects with versioning, so an object that is incorrectly altered can still be recovered from a prior version. Organizations can manage technology transitions with object storage by introducing a new node or location to store an object element, replacing one that is to be retired, and automatically redistributing object elements.
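The dispersal idea behind that durability point can be sketched in miniature. Below is a toy single-parity scheme, my own illustration rather than any vendor’s actual erasure code: an object is split into k data fragments plus one XOR parity fragment, and any k of the k+1 fragments are enough to recover it. Production object stores use more general erasure codes that tolerate many simultaneous failures across sites.

```python
from functools import reduce

def disperse(data: bytes, k: int):
    """Split data into k equal fragments plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(k * frag_len, b"\0")      # pad to split evenly
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity], len(data)

def reconstruct(fragments, orig_len: int, k: int) -> bytes:
    """Recover the object from k+1 fragments, at most one of which is None."""
    lost = [i for i, f in enumerate(fragments) if f is None]
    if lost:
        # XOR of the surviving k fragments rebuilds the missing one,
        # whether it was a data fragment or the parity fragment.
        present = [f for f in fragments if f is not None]
        fragments[lost[0]] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*present))
    return b"".join(fragments[:k])[:orig_len]
```

With k data fragments spread across locations, any single fragment (or the site holding it) can disappear and the object is still readable, which is the property the durability argument relies on.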
Here are ways that object storage is being used:
• Web-based storage is typically implemented with object storage using a simple GET/PUT access method. The most common in usage is some form of the Amazon S3 protocol, now simply referred to as S3 or S3-like.
• On-premises object storage systems, often called private clouds, are being promoted by vendors to give companies web-style self-service and massive scalability when they have reasons not to use public cloud-based systems. These are usually object storage systems with similar access protocols, such as S3.
• Some applications are being modified to write to object storage directly, and new applications are being developed with cloud-based access (public or private) and use of the extended metadata as fundamental design elements. The most commonly modified applications are backup and archiving software, which represent some of the first uses of cloud-based storage.
• Gateway devices that bridge file access to object storage in clouds are available. These systems allow use of cloud storage from existing applications without modification while adding features such as metadata tagging and file caching.
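The flat-namespace GET/PUT model running through the list above can be illustrated with a toy in-memory store. This is a sketch of the access pattern only, not the S3 wire protocol: PUT hands back an object ID (here a content hash) and GET retrieves by that ID, with no directory hierarchy involved.

```python
import hashlib

class FlatObjectStore:
    """Toy in-memory object store with a flat namespace: no directories,
    just a mapping of object ID -> object."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # Use a content hash as the object ID. This is an illustrative
        # choice; real systems may assign IDs or let the client pick a
        # key, as S3 does with bucket/key names.
        oid = hashlib.sha256(data).hexdigest()
        self._objects[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid]
```

Because lookup is a single ID-to-object mapping rather than a walk through nested directories, the namespace scales to billions of objects without the bottlenecks of a hierarchical file tree.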
So why do I say object will not replace file storage anytime soon? Because changes in applications occur slowly, if at all. There will be resistance to making a transition from files to objects. Not only is there a predominance of file-based applications, there is also widespread familiarity with using files and file structures. The current use of files has been effective for most environments and change will be embraced only when it is necessary.
Object storage is most likely to solve problems in backup, archive and large content repositories, and those problems are already being solved by current technology. But while object will not replace file usage today, it does deal with the problems of scale and durability, and opens new opportunities for storage in the future.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Like a bunch of other startups, hyperconverged vendor SimpliVity is going after customers who struggle with storage for a virtual desktop infrastructure (VDI).
SimpliVity today disclosed partnerships with graphics card vendor Nvidia and PC over IP (PCoIP) developer Teradici that will enable SimpliVity OmniCube customers to use outside technologies to improve VDI performance. These partnerships do not include OEM or reseller agreements, but SimpliVity will include Nvidia and Teradici products in reference architectures built for VDI storage.
Nvidia has VDI-specific graphics cards, and Teradici developed PCoIP, which offloads compression to hardware chips on the server and remote clients. VMware uses Teradici PCoIP technology in its VMware Horizon View VDI software, and Riverbed partners with Teradici to embed PCoIP in its Steelhead WAN optimization appliances to improve VDI performance.
VDI has been a common use case for systems built specifically for virtual machine (VM) storage and for storage arrays that use flash. Along with SimpliVity, converged vendors such as Nutanix, Pivot3 and Scale Computing, VM-aware storage vendors Tintri and Tegile, flash array vendors Nimble Storage, Nimbus Data and Pure Storage, and software vendors Atlantis and GreenBytes list VDI as a key driver of their business. The larger storage vendors also see VDI as a key storage use case.
“We’ve always enabled VDI,” SimpliVity vice president of marketing Tom Graves said. “VDI is a lot of VMs basically. They’re desktops, but the form factor is a lot of VMs. When you load up a traditional infrastructure with virtual desktops, there’s a lot of complexity. The more we talk to customers, the more we see VDI is a mission critical application.”
Graves said the use of host-based Nvidia and Teradici technology gives SimpliVity an advantage over traditional storage arrays in handling VDI because OmniCube stacks include virtual servers. SimpliVity already has features that should help with VDI such as inline deduplication and support for one-to-one persistent desktops.
The startup will need all these features and partnerships to stand out now that nearly every storage vendor is going after VDI.
Despite the company name, CloudByte’s ElastiStor software isn’t limited to cloud storage. The ZFS-based software provides Quality of Service (QoS) for storage capabilities separately for each application, and can apply to data centers and virtual machine storage as well as clouds.
However, the startup is concentrating on cloud providers, with features in ElastiStor 1.2 designed to attract them.
These features include the ability to migrate data from other ZFS-based vendors through the ElastiStor console, SAS multipathing, and support for high-availability failover for Fibre Channel storage.
CloudByte CEO Greg Goelz said the changes provide flexibility for cloud providers to change their storage infrastructure and use multiple technologies. The ability to migrate from other ZFS systems does that plus can help CloudByte get a foothold with new service providers.
“If you can leave data in place and move to a new storage controller, then there is truly no vendor lock-in,” Goelz said. “We can move people to a cloud-based solution.”
CloudByte recently picked up cloud provider Netmagic as a customer. Shriranga Mulay, Netmagic’s SVP of engineering, said his company was sold on the QoS features and the ability to avoid vendor lock-in. Mulay said Netmagic will use CloudByte software to guarantee various performance levels in its storage services.
“We plan to use it for a service where we can guarantee performance,” he said. “We’ll use it in connection with existing services, but to differentiate service levels.”
Verizon Terremark this week launched an object-based storage cloud that will compete directly with Amazon S3, Google Cloud Storage, Microsoft Windows Azure and others.
Verizon Cloud Storage is part of Verizon’s overall cloud compute services. Tom Mays, SVP of data solutions for Verizon Terremark, said the main target customers for the cloud storage will be enterprises and government agencies.
Mays said Verizon is using commercial object storage software, optimizing it and integrating it into a hardware stack that it built itself. It will support SOAP, REST and the Amazon S3 API. He declined to say whose object software is in the stack, but he said it comes from a commercial vendor and not OpenStack.
Verizon Cloud Storage will also support cloud gateways on the market. Mays said it will soon announce which gateways it will support. The storage cloud is now available as a paid public beta.
Pricing is not yet set, but Mays said the service will be priced according to levels of durability. He said it currently can protect against seven simultaneous failures, but so far all of the data is in one of Verizon’s 50 global data centers. He said more data centers will be added, and the early roadmap includes a three-site geographic distribution setup.
Verizon is looking to expand service to current backup customers, hoping they will add more sites to store multiple copies of data.
“Until now we’ve lacked an object addressable storage platform,” Mays said, adding that the idea of object storage is transparent to most customers.
“Customers are coming to us saying they just want cheap storage in our data center,” Mays said. “They don’t care if it’s file, block or object. The advantage with object storage is the ability to do more efficient geographic distribution than traditional volume-based parity storage systems.”
Mays said he expects cloud storage pioneer Nirvanix’s sudden demise to help Verizon’s storage cloud, although it may be too late to pick up Nirvanix customers. While the Nirvanix failure will prompt companies to rethink the cloud, Mays said, it also places greater emphasis on going with larger, well-established providers.
“I think we’ll benefit from that, it exposes the fact that you need to pick your cloud provider wisely,” he said. “You want somebody who’s a stable name and will be around for a while. If you only have cloud storage and no other layered value-added services, it’s not as attractive as having a broad range of things that you can use that storage with.”
You know the old adage that the last one out should turn off the lights? Well, in the case of Nirvanix, the last person out needs to delete the petabytes of data stored on its infrastructure.
“My concern is if there is anybody left to deal with the data deletion,” said George Crump, president of analyst firm Storage Switzerland. “I haven’t heard anyone talking about this. I don’t know if there will be any employees left to execute that function. Are there enough employees left to reformat the drives? There are no details about what happens on Oct. 16.”
The seven-year-old cloud provider has filed for Chapter 11 bankruptcy and given customers an Oct. 15 deadline to get their data out of its cloud. Typically, Chapter 11 bankruptcy means a company intends to reorganize and recapitalize, but Nirvanix said it was filing to “maximize value for its creditors while continuing its efforts to provide the best possible transition to customers.”
Crump said if Nirvanix’s technology assets are sold at auction “there could be some problems. The assumption is that at some point, somebody will come in and clean things up and that includes the clear destruction of the data. But there are no details about what happens on Oct. 16.”
The Nirvanix high-end platform was designed for millions of users, billions of files and exabytes of data, which helped differentiate its offering from other cloud storage providers. Nirvanix used a geo-diverse namespace to create logical pools across all deployed nodes in public, hybrid or private cloud implementations.
There have been reports that a social media firm has purchased 85 PB of object storage from EMC. This is significant for several reasons. Certainly it is a large amount of capacity in a single purchase that should make the salesman and vendor happy. It is also an example of how the major focus of an object storage deployment is on solving a problem involving large amounts of capacity rather than on the underlying technology.
The capacity problem, in this case a service provider environment, is being solved with the new generation of object storage where a RESTful interface such as Amazon’s S3 over HTTP is used to retrieve (get) and store (put) data in the form of objects. Object storage’s flat namespace is another feature used to support massive scaling. There are other characteristics that object storage brings which are covered in Evaluator Group research.
Some preliminary conclusions can be drawn from the reports:
• The massive scale in both capacity and number of objects required for some uses results in massive storage acquisitions. Vendors do not want to miss out on this opportunity, which turns into big money and the chance to sell more storage outside of the traditional data center. Additional management software and data protection are an obvious revenue opportunity beyond the storage acquisition.
• Service providers with specialized usages are the early object storage customers. Private clouds or in-house solutions for content repositories, data analytics storage, and archiving will follow.
• As usual, not all major deals will be disclosed. Vendors simply will not be able to reference some customers, often because companies do not want competitors to know how they are solving problems.
These conclusions lead to a set of predictions for the future of object storage:
• The frequency of major purchases – meaning multi-petabyte acquisitions – will continue to increase as scaling needs become apparent.
• Vendors will disclose major successes to highlight their “leadership” in the category. The disclosures drive more business by creating the perception that these vendors must be considered for possible solutions.
• Multiple types of usages will develop over time. Currently, content repositories, archiving, and collaboration solutions are areas where object storage is being applied. Storing of analytics data is another developing use case. There will be more usages and some interesting applications will develop over time.
The storage industry is at the beginning of seeing a new generation of object storage as the solution to massive scaling problems. This will be an interesting area to watch – and to be involved in.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Failed cloud storage provider Nirvanix has filed for Chapter 11 bankruptcy, according to a statement posted Tuesday on the company’s web site.
Nirvanix also posted a statement saying it is working to make resources available until Oct. 15 to help customers move data off the Nirvanix cloud onto similar clouds from Amazon, IBM, Google or Microsoft. Nirvanix set up a rapid response team with IBM SoftLayer to help ease the transition. IBM sold Nirvanix cloud storage services through an OEM deal.
Interestingly, IBM rival Dell stands to take a big financial hit from Nirvanix’s bankruptcy. Nirvanix owes Dell Marketing LP $407,000, making it the largest unsecured creditor. Other creditors include Salesforce.com, hosting services provider Equinix and analyst firm Gartner. The Nirvanix Chapter 11 filing reported assets of between $10 million and $50 million, and roughly the same amount of liabilities.
The Nirvanix statement said it would “pursue alternatives to maximize value for its creditors while continuing efforts to provide the best possible transition for customers.”