Storage Soup


January 15, 2014  12:37 PM

Convergence startup Nutanix makes investors hyper, pulls in $101 million in funding

Dave Raffo

Nutanix released numbers this week that establish the startup as the runaway leader in the young hyperconverged storage market. The big news is that it closed a massive $101 million funding round, nearly double competitor SimpliVity’s impressive $58 million round from late 2013.

Although Nutanix’s funding round comes up short of all-flash array vendor Pure Storage’s $150 million round from last August, it does raise the startup’s valuation to close to $1 billion. Nutanix also said it has passed $100 million in revenue in two years and has 13 customers who have each spent more than $1 million on its products – impressive numbers for a startup, especially when overall storage sales dipped in 2013.

Nutanix’s Virtual Computing Platform combines storage, servers and hypervisor in one box. The storage includes solid-state drives as well as hard drives. Its customers include eBay, Toyota and McKesson.

With $172.2 million in total funding and sales accelerating rapidly, the round will likely be the last Nutanix needs as a private company; the startup is weighing options to go public. The money also gives Nutanix a war chest to battle current and new competitors, including VMware.

“We wanted to raise enough to get us to the next major milestone, which is likely an offering in the public markets,” said Howard Ting, Nutanix vice president of marketing. “We also wanted to fuel the business. We’re seeing tremendous demand for our product.”

Ting said the funding will help Nutanix beef up its international sales team. He said the startup has sales presence in at least 20 countries but will look to put more reps in most of them. Around one-third of its sales have come from outside the United States, which is also unusually high for a U.S.-based startup.

Nutanix will also look to expand its products’ capabilities, adding analytics, the ability to connect to the public cloud and customer services. Last year, Nutanix added software deduplication for primary storage and this month went GA with support for Microsoft Hyper-V to go with its VMware and Citrix XenServer support.

Ting said he expects the IPO to come within a few years. “We don’t want to put a timeframe on it,” he said. “We want to build a company of lasting value, and an IPO will be one step in the journey to build the next iconic tech infrastructure company. We want to build the next VMware or NetApp. The IPO is not the end goal for anyone here.”

NetApp and VMware are also competitors for Nutanix, although VMware remains more of a partner than a competitor now. Ting said Nutanix almost always goes against legacy storage vendors such as NetApp, EMC, Hewlett-Packard, Dell and IBM rather than other hyperconverged startups.

VMware is preparing to enter the hyperconverged market with its Virtual SAN (vSAN) software that pools capacity from ESXi hosts. vSAN is in beta, but is seen as a future competitor to the hyperconverged products on the market.

“We appreciate and respect VMware,” Ting said. “But the [vSAN] product’s not ready yet, it’s not even shipping GA. When it does ship, limitations around scalability and ease of use will prevent it from being widely deployed. It will take them a couple of years. And then, how do they deal with the potential conflict with [VMware parent] EMC? When we displace EMC, EMC can’t do anything about it. But when a VMware sales rep sells vSAN instead of EMC VNX or VMAX, how will that work? We see VMware positioning vSAN for VDI and small organizations.”

Riverwood Capital and SAP Ventures led the Nutanix funding round, with Morgan Stanley Expansion Capital and Greenspring Associates participating as new investors.

January 9, 2014  8:55 AM

EMC adds another CEO to its boardroom

Dave Raffo

Joe Tucci found a way to make David Goulden EMC CEO without giving up his own CEO post.

EMC Wednesday named Goulden CEO of EMC Federation, which consists of EMC’s core storage business. Tucci remains chairman and CEO of EMC Corporation, which includes EMC Federation plus EMC-owned VMware and platform-as-a-service startup Pivotal.

Goulden’s promotion probably won’t mean much in terms of his job function. He has been president and chief operating officer of EMC Federation since July 2012, and he still performs many of the functions of chief financial officer, a job he held for the previous seven years. That means he was already running most of the major areas of EMC Federation. The promotion does give Goulden experience as CEO, which could help him convince the EMC board that he is ready to take over Tucci’s job when Tucci retires.

Goulden isn’t the only CEO inside EMC primed to replace Tucci, though. VMware CEO Pat Gelsinger and Pivotal CEO Paul Maritz are also candidates, and both have been mentioned as outside candidates to become the next Microsoft CEO.

Goulden’s relationship with Tucci pre-dates his 11-year tenure at EMC. They worked together at Wang Corp. before joining the storage giant.

Tucci may shed light on his retirement and succession plans during EMC’s earnings call later this month. He announced a few years ago that he would retire at the end of 2012, but he is still around; EMC in late 2012 extended his contract through February 2015. His replacement is expected to come from within EMC Corp.


January 6, 2014  2:59 PM

Spanning poised to extend cloud-to-cloud backup capabilities

Dave Raffo

Spanning Cloud Apps CEO Jeff Erramouspe predicts 2014 will be a big year for cloud-to-cloud backup. That, of course, would be a good thing for his company, which provides backup for Google Apps and Salesforce.com.

Spanning enters 2014 with a new CEO (Erramouspe replaced founder Charlie Wood Nov. 1), a GA version of Backup for Salesforce due within the next few months, and enterprise momentum from its entry into the EMC Select program as a partner of EMC’s Mozy cloud backup software.

Demand is also rising as more companies host key applications in the cloud. “If people are all-in on the cloud and we can do all five of their apps, that puts us in a good position,” Erramouspe said.

So far, Spanning Backup protects two apps. Like its main competitor Backupify, Spanning began with Google Apps, which it has backed up since 2011. In late 2013, it added a private beta program for Salesforce.

Erramouspe said Spanning is looking to expand to more applications. He said he has been approached by companies in the Salesforce ecosystem, such as cloud CRM vendor Veeva, about building backup for them. But the next major addition will likely be backup for Microsoft Office 365.

“Our big partner [EMC] is interested in that,” Erramouspe said. “They make a lot of money backing up Exchange on premise. They don’t want to lose that revenue stream as the customer goes to the cloud.”

Spanning’s Backup for Google Apps appears on the Google “more” menu, and admins can determine what files they back up. Spanning notifies customers of every file that hasn’t been backed up as well as sync errors that otherwise could go undetected.

Spanning backs up data on Amazon Web Services, storing the files on S3. The company may add the ability to back up cloud apps to an on-site disk appliance this year, although Erramouspe said he has no intention of protecting on-premise apps.

“I don’t ever see us doing on-premise data,” he said. “Our sources are cloud applications.”

Another short-term goal for Spanning Backup is to do restores inside the Salesforce application, as it does for Google Apps. Today, Salesforce restores are done by exporting the data and re-importing it. “There’s a lot of manual effort involved,” Erramouspe said.

Erramouspe said Spanning has about 3,000 domain customers (including Netflix) on Google Apps, which is about one-quarter the number of Google domains Backupify claims to protect. The products have slightly different pricing models. Backupify charges a monthly subscription and Spanning requires an annual fee up front. Erramouspe said customers who signed up in 2013 have renewed at about a 96% rate.

Spanning charges $40 per user per year, with a 99.9% uptime SLA and unlimited storage.

Backupify and other competitors offer storage-based pricing that charges customers on a per-TB or per-GB basis. Erramouspe said storage-based pricing “doesn’t make a ton of sense, it means we have to keep track of usage. We price per user per year with unlimited storage.”

Other cloud-to-cloud backup competitors include CloudAlly and SysCloud, and Asigra Cloud Backup for service providers can also protect Google Apps and Salesforce.

Perhaps the biggest threat to the cloud-to-cloud backup providers would be if the Software-as-a-Service (SaaS) vendors decided to offer their own built-in backup. But they have shown little interest in that so far. Without a backup app, getting lost data back from a SaaS provider can cost thousands of dollars.

“Google has [Apps] Vault and they’re saying they will extend that to Google Drive, but we haven’t seen it yet,” Erramouspe said. “I am a little bit concerned about that. I don’t think Salesforce wants to deal with it. They offer a restore service today and go back to tapes, but it’s a high price point and takes weeks to happen. They want to get out of that. But even if Google offers backups, what do I do if I can’t get to my Google application?”


January 2, 2014  11:19 AM

The truth about encryption

Randy Kerns

When talking to IT professionals about encryption, I often notice a lack of understanding about information security. It often comes as a surprise that encryption inside of a disk storage system only protects data when someone steals the disk drives out of the system and removes them from the data center.

The main motivation for IT to encrypt data is to meet regulatory requirements. Information such as protected healthcare data (think of patient medical records) must be encrypted because of laws or internal policies. This leads to using storage systems that encrypt the data on the devices in case someone steals the disks and has the skills and perseverance to put the data back together from the different pieces in a RAID group and storage pool.

Without company or regulatory requirements, I do not see wide-scale use of encryption. But if you are looking to encrypt, there are several issues to address.

When encrypting in the disk system, using self-encrypting drives is easy: there is no apparent performance hit, and the extra cost is minor. Storage systems that encrypt in the controller are believed to have a performance impact because they use controller processor cycles. In truth, the performance impact varies greatly depending on the implementation.

Another concern regarding encryption within a storage system is the management of keys used to encrypt and decrypt data. Key management within a storage system is transparent to the IT administration. However, exporting keys to an external key manager adds complexity and bureaucracy. The extra complexity is not worth the bother considering how unlikely it is that a disk drive will be stolen from a storage system inside of a data center.

From an information management perspective, encrypting data in the storage system may give a false sense of information protection. The limited scope of the protection may not be clear when someone claims that their data is encrypted. The reality is that the information should be secured at the application level (encryption as part of the application access/creation). Access and identity control are the most important parts. Encrypting data in disk systems provides no protection against someone using the application or gaining unauthorized access through a server connected to the storage system.
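Kerns’ point about securing data at the application level is easy to demonstrate in code. Below is a minimal sketch, assuming Python’s third-party cryptography package; the record contents and inline key generation are illustrative only, since a real deployment would pull keys from a key manager.

    # Minimal sketch of application-level encryption with AES-256-GCM.
    # Assumes the third-party "cryptography" package; in practice the
    # key would come from a key manager, not be generated inline.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # fresh 96-bit nonce per record
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt_record(key: bytes, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_record(key, b"patient record 1234")
    assert decrypt_record(key, blob) == b"patient record 1234"

Data encrypted this way stays protected wherever it lands – on disk, on a replica, or on a stolen drive – which is exactly the coverage that drive-level encryption alone cannot offer.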

(Randy Kerns is senior strategist at Evaluator Group, an IT analyst firm.)


December 24, 2013  11:31 AM

EVault sets out to melt Glacier with new archiving cloud

Dave Raffo

EVault this month revealed plans to build an 8 exabyte archiving cloud that will use more than 500 disks per server and will eventually incorporate parent company Seagate’s Kinetic storage. The cloud, called Long-Term Storage Service (LTS2), is already functional, but a blog post from EVault VP Mikey Butler makes it clear that LTS2 is still a work in progress.

The EVault version is faster than Amazon’s Glacier but more expensive. It currently costs $15 per TB per month, or $0.015 per GB per month, compared to Glacier’s price of $0.01 per GB per month. But while Glacier may require five hours to retrieve data, EVault says data stored on LTS2 is immediately available. According to the LTS2 web site, data can be accessed with a first-byte latency of less than five seconds.
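The per-gigabyte rates make the premium easy to quantify. A quick sketch, using a hypothetical 50 TB archive (the archive size is an assumption for illustration; retrieval fees are ignored):

    # Monthly cost of a hypothetical 50 TB archive at the published
    # rates: $0.015 per GB for LTS2 vs. $0.01 per GB for Glacier.
    archive_gb = 50 * 1024
    lts2_cost = archive_gb * 0.015      # $15 per TB per month
    glacier_cost = archive_gb * 0.01
    print(f"LTS2:    ${lts2_cost:,.2f} per month")
    print(f"Glacier: ${glacier_cost:,.2f} per month")
    print(f"Premium for instant access: ${lts2_cost - glacier_cost:,.2f}")

For that archive, the faster service costs $256 more per month – the price of skipping Glacier’s multi-hour retrieval wait.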

LTS2 can be accessed through OpenStack Object Storage and Amazon S3 APIs, and EVault offers service level agreements (SLAs) for data durability, availability and portability. EVault says the LTS2 cloud distributes objects across disks, storage nodes, data centers and geographical zones. Customers can access the cloud through gateways from TwinStrata, Maldivica or Riverbed.
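Because LTS2 speaks the Amazon S3 API, an S3-compatible client pointed at the service’s endpoint should work without code changes. A sketch using the boto3 library; the endpoint URL, bucket name and credentials are placeholders, not documented LTS2 values:

    # Writing to and reading from an S3-compatible archive endpoint.
    # The endpoint URL, bucket and credentials below are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://lts2.example.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    with open("report.pdf", "rb") as f:
        s3.put_object(Bucket="archive", Key="2013/q4/report.pdf", Body=f)
    obj = s3.get_object(Bucket="archive", Key="2013/q4/report.pdf")
    data = obj["Body"].read()

That API compatibility is what lets gateways such as TwinStrata or Riverbed front the service without custom integration work.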

In his blog laying out the LTS2 mission, Butler wrote that EVault and Seagate have set out to “create the world’s largest, most durable, cost effective, easiest to adopt, disk archival cloud.” He also laid out 10 challenges, which include scaling the cloud to 8 exabytes to meet pricing objectives. His vision for the service includes self-healing drives to minimize downtime, 93% of the drives powered down at any time to reduce power consumption, and a low-touch model that will require only one operator for every 100 racks of equipment.
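The 93% power-down goal implies very few active spindles at any moment. A quick check against the 500-disk-per-server figure, both numbers from Butler’s post:

    # With 93% of drives powered down, a 500-disk LTS2 server keeps
    # only about 35 drives spinning at any given time.
    disks_per_server = 500
    active = disks_per_server * (1 - 0.93)
    print(f"Active drives per server: {active:.0f}")  # -> 35

Powering down the remaining drives is where the service’s power savings would come from.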

Butler also writes of the “wonderful technology breakthrough” that Seagate calls its Kinetic Drive. Kinetic Drives use Ethernet and object key values rather than SAS, SATA or SCSI block storage interfaces to connect to applications. Seagate’s goal is to eliminate the storage server tier and enable applications to speak directly to the storage device. Seagate’s roadmap calls for Kinetic drives in mid-2014. Butler did not say how many of the LTS2 design goals will require Kinetic Drives, or when the new archiving cloud will implement those new drives.


December 23, 2013  1:57 PM

Forget flash — Seagate invests in more hard drive technology

Dave Raffo

Seagate did its last-minute Christmas shopping in the U.K., picking up hard drive testing and OEM storage enclosure company Xyratex today for $374 million.

Instead of making the big solid-state move that industry watchers keep waiting for, Seagate went deeper into hard drive technology with Xyratex.

Seagate did not hold a conference call to discuss the deal, but its press release said the acquisition will strengthen its supply and manufacturing chain for disk drives and guarantee access to capital equipment. Seagate intends to run Xyratex as a standalone business.

“As the average capacity per drive increases to multi-terabytes, the time to test these drives increases dramatically,” Seagate VP of operations and technology Dave Mosley said in the release. “Therefore, access to world-class test equipment becomes an increasingly strategic capability.”

Xyratex enclosures are used for Dell EqualLogic, Hewlett-Packard 3PAR StoreServ and IBM Storwize and XIV storage arrays. NetApp recently ended its OEM relationship with Xyratex.

Seagate and its main enterprise rival Western Digital were Xyratex hard drive testing customers. It is unlikely that the Western Digital relationship will continue.

Xyratex in the past two years also began selling ClusterStor high performance computing systems based on the Lustre file system. ClusterStor systems are sold by Cray, Dell, HP and others.

Xyratex is a public company that reported $638 million in revenue through the first nine months of 2013. That was down from $992 million over the same period in 2012, with the end of the NetApp deal causing much of the decline. Xyratex has been profitable this year, but unhappy investors prompted the company to replace CEO Steve Barber with Ernie Sampias in April and called for it to seek a buyer.

Seagate expects the deal to close in the middle of 2014.


December 18, 2013  9:09 PM

IBM to introduce ‘cloud of clouds’ in 2014

Sonia Lelii

IBM has developed a software toolkit that allows users to store and move data across multiple public or private clouds through a drag-and-drop interface, protecting the data against service outages and data loss.

The company has code-named the product “Intercloud Storage (ICStore),” and it uses an object storage interface for data migration, backup and file sharing in third-party clouds. The toolkit will be available in beta to Storwize customers in January 2014, but a general availability date has not been disclosed.

IBM’s SoftLayer will be the default cloud, but the technology will support more than 20 cloud services, including Microsoft Azure, Amazon S3, Rackspace and OpenStack object storage.

“This is an internal research project and we are using clouds to solve a few issues with cloud storage,” said Thomas Weigold, IBM’s manager for storage systems research at its Zurich research laboratory. “It’s a problem with security, reliability and vendor lock-in. These are the points where customers have problems. The idea is to use more than one cloud through replication or erasure coding.”

IBM calls this technology “cloud of clouds” and the company has done proof-of-concepts with a select number of customers during the last two years.

Weigold said the toolkit can be customized so that replication or erasure coding can be used, depending on what the data needs. The toolkit addresses space efficiency, data synchronization and metadata coordination when storing data redundantly on object storage. If one cloud service fails, the other cloud will be available transparently to the user.
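The replication half of that design is simple to sketch. Below is a conceptual illustration of the “cloud of clouds” idea – every write goes to all backing clouds, and reads fall back transparently when a provider is down. The client objects are hypothetical stand-ins for real object-store SDKs, not ICStore’s actual interfaces:

    # Conceptual "cloud of clouds" replication: PUTs go to every
    # backing cloud; GETs fall back to the next provider if one fails.
    # The cloud clients are hypothetical stand-ins for real SDKs.
    class MultiCloudStore:
        def __init__(self, clouds):
            self.clouds = clouds              # each exposes put()/get()

        def put(self, key, data):
            for cloud in self.clouds:         # full replication: n copies
                cloud.put(key, data)

        def get(self, key):
            for cloud in self.clouds:         # first healthy provider wins
                try:
                    return cloud.get(key)
                except Exception:
                    continue                  # outage: try the next cloud
            raise KeyError(key)

Erasure coding, the toolkit’s other option, would instead split each object into n fragments of which any k reconstruct the data, cutting the capacity overhead from n full copies to n/k.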

Workloads can be positioned for high availability and disaster recovery. For instance, primary data can be stored in a private cloud while snapshots of the data can be moved to an external public cloud and encrypted.

“You can be very specific,” said Weigold, “such as all files in the directory of this size should be migrated to these providers but not before an application’s copies are replicated.”

The software will have integrated protection features and AES 128-, 192- or 256-bit encryption for security.


December 16, 2013  3:04 PM

Flash array vendor Violin sends CEO overboard following post-IPO swoon

Dave Raffo

Struggling flash array vendor Violin Memory today dumped Don Basile, 11 weeks after he took the company public.

Violin chairman Howard Bain III takes over as interim CEO, and the board has hired an executive search firm to find a permanent replacement for Basile.

Violin’s initial public offering (IPO) turned out to be the beginning of the end for Basile. The company’s stock price dropped from $9 at its IPO to $2.68 this morning, and Violin missed analysts’ expectations in its first quarter as a public company. CTO Jonathan Goldrick resigned last week, and there has been speculation since then that Basile would leave. Violin is also besieged by a rash of investor lawsuits and analyst downgrades because of its poor financial performance last quarter.

Basile became Violin’s CEO in 2009 after serving as CEO of Fusion-io. During his tenure, Violin raised at least $180 million in funding and became the all-flash array market leader – according to Gartner – with $72 million in revenue in 2012.

However, investors and analysts were stunned to learn Basile earned $19 million in 2013 while the company lost more than $90 million through the first three quarters of the year.

Much of Violin’s early sales success came before large storage vendors got into the all-flash market, but all major storage companies now have at least one all-flash platform.

Violin’s press release made it clear that the change in CEO was the board’s choice, and not Basile’s. While the release went into great detail on Bain’s background, it mentioned Basile only once – saying Bain’s new role “follows the decision of the board of directors to terminate Donald Basile.”

David Walrod, chairman of Violin’s nominating and corporate governance committee, was quoted in the release saying “the board believes this leadership change is necessary to enhance the management team’s operational focus and ability to execute the company’s plans for profitable growth.”

Violin lost $34.1 million last quarter on revenue of $28.3 million.

Bain has been on Violin’s board since October 2012 and became chairman in August. He has been CFO of Symantec, Informix, Portal Software and Vicinity.


December 16, 2013  10:42 AM

Nimble Storage IPO receives warm Wall Street reception

Dave Raffo

Nimble Storage showed that storage vendors can receive a favorable reception on Wall Street when it completed a successful initial public offering Friday. Nimble’s debut comes on the heels of all-flash vendor Violin Memory’s rocky IPO. Violin’s stock price began tanking on the first day and has continued to fall for nearly three months.

Nimble’s price opened at $21 Friday – above its target range of $18 to $20 – and closed the day at $31.10 for a valuation of $1.48 billion. It opened at $34.90 today. In September, Violin priced its IPO at $9, finished its first day at $7.11, and kept dropping after reporting poor results in its first quarter as a public company. Violin opened today at $2.68.

Much of the attention on Nimble ahead of its IPO compared it to Violin and to server-based flash vendor Fusion-io, whose stock price has also dropped sharply since its 2011 IPO. But Nimble is in a different situation than those two. First, it doesn’t sell only flash. Its arrays are hybrid, combining a small amount of flash with spinning disk. And Nimble’s competitive situation hasn’t changed over the past year. In contrast, Violin and Fusion-io beat the major vendors to market with their flash products but now face a glut of competition.

Nimble stepped into the market dominated by EMC, NetApp and others in 2010 and has grown significantly with $84 million in revenue over the first nine months of 2013. While the major vendors now all have all-flash arrays to compete with Violin and other startups, they don’t have any new products that counter Nimble’s strengths. Nimble has been successful – although not yet profitable – with an architecture that combines data reduction, ease of management and advanced cloud-based data monitoring and analytics.

And yes, it has taken advantage of flash but doesn’t rely completely on it.

“These days you can’t talk about storage without mentioning flash,” said Radhika Krishnan, Nimble’s vice president of solutions and alliances.

Still, she said storage doesn’t have to be all-flash to deliver significant performance and latency advantages over spinning disk. Nimble improves the performance of solid-state drives (SSDs) with data reduction, efficient snapshots and an architecture that minimizes the number of writes that go to flash.

“If you have a hybrid system with high performance and predictable latency, why would you want to pay all that money for all-flash arrays or flash on servers?” Krishnan said. “We think hybrid arrays are the way to go, not just in 2014 but for a significant time to come. Not all hybrid arrays are created equal. Any hybrid arrays that use tiering where all writes get staged on flash have to pay the price of write endurance. Then you have to protect your flash with RAID and over-provisioning.”
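Krishnan’s distinction between caching and write-staging tiers can be illustrated with a toy model: writes land on disk, and only blocks that prove themselves hot under read load are promoted into flash. This is a conceptual sketch of the approach she describes, not Nimble’s actual CASL implementation; the promotion threshold is an arbitrary assumption:

    # Toy hybrid store: writes go to disk only; blocks read repeatedly
    # are promoted into a flash cache. Conceptual sketch, not Nimble's
    # actual architecture. PROMOTE_AFTER is an arbitrary threshold.
    from collections import Counter

    PROMOTE_AFTER = 3

    class HybridStore:
        def __init__(self):
            self.disk, self.flash = {}, {}
            self.read_counts = Counter()

        def write(self, block, data):
            self.disk[block] = data           # flash never absorbs writes
            self.flash.pop(block, None)       # drop any stale cached copy

        def read(self, block):
            if block in self.flash:
                return self.flash[block]      # fast path from flash
            data = self.disk[block]
            self.read_counts[block] += 1
            if self.read_counts[block] >= PROMOTE_AFTER:
                self.flash[block] = data      # promote the hot block
            return data

Because flash only ever receives promoted read data in this model, the write-endurance penalty Krishnan attributes to write-staging tiers never applies.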

Nimble’s challenge now is to show it can spin that technology into a profitable company if it is to stay around in a flash-dominated storage world.


December 13, 2013  11:26 AM

HP shines up its converged storage

Dave Raffo

Over the past year, Hewlett-Packard has made it clear that its storage future revolves around what it calls its converged storage platforms – the 3PAR StoreServ SAN array, StoreOnce data deduplication backup appliance, and StoreAll archiving system. These products now make up more than 40% of HP’s storage revenue, and will account for most of its storage revenue in 2014.

So it’s no surprise that these were the products HP upgraded during HP Discover Barcelona this week. The enhancements were mostly speeds and feeds, plus improved quality of service on the 3PAR platform.

The new Priority Optimization feature in 3PAR StoreServ OS 3.1.3 allows managers to set thresholds for IOPS, bandwidth and latency for each application or tenant in a multi-tenant system. That enables QoS for applications or virtual machines that can be managed in real time.
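Caps of this kind are commonly enforced with a token bucket, which admits I/Os up to the configured rate and throttles the rest. A generic sketch of a per-tenant IOPS limiter – illustrative of the technique only, not HP’s Priority Optimization implementation:

    # Generic token-bucket IOPS limiter, the standard technique behind
    # per-tenant QoS caps. Illustrative only; not HP's implementation.
    import time

    class IopsLimiter:
        def __init__(self, iops_limit):
            self.rate = iops_limit            # tokens refilled per second
            self.tokens = float(iops_limit)   # bucket starts full
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.rate,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True                   # admit the I/O
            return False                      # over the cap: delay or queue

    tenant_a = IopsLimiter(iops_limit=5000)   # hypothetical tenant cap

A bandwidth cap works the same way with bytes as tokens, while a latency target is typically met by prioritizing the constrained tenant’s queue rather than by throttling.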

The latest OS also supports more snapshots, volumes, replicated volumes and Fibre Channel host initiators, and all 3PAR arrays now support 800 GB solid-state drives (SSDs).

HP also added Adaptive Sparing, which optimizes flash overprovisioning. The feature takes blocks that an SSD normally holds in reserve for when other blocks wear out and uses them as spare chunks in the storage pooling architecture. HP claims this expands the capacity of each SSD, so an 800 GB SSD actually provides 920 GB of capacity.
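The claimed gain is straightforward to verify:

    # HP's Adaptive Sparing claim: an 800 GB SSD yields 920 GB usable.
    raw_gb, usable_gb = 800, 920
    gain = (usable_gb - raw_gb) / raw_gb
    print(f"Extra usable capacity: {gain:.0%}")  # -> 15%

That works out to a 15% boost over each drive’s nominal 800 GB.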

For StoreOnce, HP added the 6500 model, its highest-capacity StoreOnce system with 1.7 PB of maximum capacity. A new StoreOnce Security Pack handles encryption at the application level, and StoreOnce Catalyst dedupe acceleration software now supports Oracle Recovery Manager (RMAN).

There are two new StoreAll systems – an 8200 Gateway that supports 3PAR StoreServ on the back end and a high-end 8800 model. The Gateway brings file and object storage to the 3PAR platform. The 8800 scales to 1.5 PB per rack and 16 PB in a cluster.

HP also added native support for OpenStack Object Storage so customers can take applications developed in the cloud and move them in-house.

