Storage Soup


January 20, 2014  3:08 PM

QLogic adopts Brocade’s adapters

Dave Raffo

Brocade last week revealed it is getting out of the adapter business, and it has sold off those products to QLogic.

It’s easy to see why Brocade made this move. Despite Brocade’s position as the Fibre Channel switch market leader, its host bus adapter (HBA) and converged network adapter (CNA) products never caught on and barely made a dent in the market shares of QLogic and its main rival Emulex. Shedding that part of the business lets Brocade focus on its core FC and Ethernet switching.

But what’s in it for QLogic? The purchase price was low enough that the vendors did not have to disclose it, but why does QLogic need Brocade’s adapters at all? It already has a competing product for every one of them.

There are two advantages for QLogic, according to its director of corporate marketing Tim Lustig. It will pick up about three points of HBA market share and about 12 points of CNA share by acquiring the Brocade products, plus the deal opens the way for better technical cooperation between the two vendors. This deal follows QLogic’s decision last July to stop development of its FC switching products that compete with Brocade.

“QLogic positions this as a strategic relationship,” Lustig said of the acquisition.

The products involved are the Brocade 1860 Fabric Adapters, the 815/825 and 415/425 FC HBAs, the 1010/1020 CNAs, and HBA and CNA mezzanine cards sold by OEM partners. Brocade began selling HBAs in 2010.

Lustig said QLogic will sell and support Brocade’s adapter products, but will not upgrade any of those devices. QLogic will honor Brocade’s OEM deals with IBM, Hewlett-Packard and Dell, which often sell Brocade adapters as lower-cost alternatives to QLogic’s adapters.

“We’re not interested in the technology itself,” Lustig said. “We acquired only the current product lines, and we will be responsible for support of products already sold.”

QLogic will also integrate Brocade’s ClearLink diagnostics technology into its HBAs, following a similar announcement made by Brocade and Emulex last November. QLogic and Brocade have also agreed to align product plans and testing for Gen 5 (16 Gbps) and Gen 6 (32 Gbps) FC technology, and to jointly market next-generation storage area networking (SAN) products.

Lustig said he expects 2014 to be the year when 16-gig FC picks up steam. He said QLogic still gets about 70 percent of its revenue from 8 Gbps FC devices and about 10 percent from 16 Gbps, with most of the rest from 4 Gbps. “The market is just starting to transfer over,” he said. “We think 2014 will be the year for 16-gig.”

January 17, 2014  8:42 AM

SGI buys Starboard assets, engineering but not its arrays

Dave Raffo

The saga of Starboard Storage ended this week when SGI bought the intellectual property of the hybrid unified storage company. SGI will use the technology in its Active Archiving platform, but will not sell Starboard’s storage arrays.

Bob Braham, SGI’s chief marketing officer, said Starboard’s technology can fill in some gaps of SGI’s archiving platform, especially around high availability. “We found requirements that customers had that we were delivering through professional services,” he said. “Starboard mapped to that perfectly. We found the high availability part most interesting.”

Unified storage vendor Reldata re-launched as Starboard Storage in Feb. 2012, adding flash and auto-tiering to its products. Starboard received $13 million in funding a month after the re-launch, but then in March 2013 investors suddenly put the company up for sale and discontinued sales of its arrays. After failing to find a buyer for the entire company, Starboard closed down late last year but continued to pursue an asset sale.

Braham said SGI will keep most of Starboard’s New Jersey-based research and development team, which also brings flash expertise to SGI.

SGI’s archiving products also include disk spin-down technology acquired from Copan in 2010 and software it picked up from FileTek last October. SGI sells its archiving product as an appliance or as software only. Either way, Braham said, “the real secret sauce is the software. We scan primary storage for data not frequently used and move data onto lower-cost storage.” FileTek software can be used to move archived data to the cloud as well. Braham would not provide specifics on how Starboard’s technology will fit into the archiving products.
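The scan-and-move policy Braham describes is simple to picture. Here is a minimal sketch of the general idea — not SGI’s actual software — that walks a primary file system, finds files untouched for a set number of days and relocates them to a lower-cost archive tier. The mount points and the 90-day threshold are assumptions for illustration.

```python
import os
import shutil
import time

PRIMARY = "/mnt/primary"   # hypothetical primary storage mount
ARCHIVE = "/mnt/archive"   # hypothetical lower-cost archive tier
AGE_DAYS = 90              # assumed threshold for "not frequently used"

def archive_cold_files(primary=PRIMARY, archive=ARCHIVE, age_days=AGE_DAYS):
    cutoff = time.time() - age_days * 86400
    for root, _dirs, files in os.walk(primary):
        for name in files:
            src = os.path.join(root, name)
            # Skip anything accessed more recently than the cutoff.
            if os.path.getatime(src) > cutoff:
                continue
            # Preserve the directory layout on the archive tier.
            rel = os.path.relpath(src, primary)
            dst = os.path.join(archive, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)

if __name__ == "__main__":
    archive_cold_files()
```

A real archiver would also track what was moved so the data remains findable; the sketch omits that bookkeeping.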


January 15, 2014  12:37 PM

Convergence startup Nutanix makes investors hyper, pulls in $101 million in funding

Dave Raffo

Nutanix released numbers this week that establish the startup as the far-and-away leader in the young hyperconverged storage market. The big news is it closed a massive $101 million funding round, which nearly doubles competitor SimpliVity’s impressive $58 million round from late 2013.

Although Nutanix’s funding round comes up short of all-flash array vendor Pure Storage’s $150 million round from last August, it does raise the startup’s valuation to close to $1 billion. Nutanix also said it has passed $100 million in revenue in two years and has 13 customers who have each spent more than $1 million on its products – impressive numbers for a startup, especially when overall storage sales dipped in 2013.

Nutanix’s Virtual Computing Platform combines storage, servers and hypervisor in one box. The storage includes solid-state drives as well as hard drives. Its customers include eBay, Toyota and McKesson.

With $172.2 million in total funding and rapidly accelerating sales, this round will likely be Nutanix’s last. The startup is weighing options to go public. The money also gives Nutanix a war chest to battle current and new competitors, including VMware.

“We wanted to raise enough to get us to the next major milestone, which is likely an offering in the public markets,” said Howard Ting, Nutanix vice president of marketing. “We also wanted to fuel the business. We’re seeing tremendous demand for our product.”

Ting said the funding will help Nutanix beef up its international sales team. He said the startup has a sales presence in at least 20 countries but will look to put more reps in most of them. Around one-third of its sales have come from outside the United States, which is unusually high for a U.S.-based startup.

Nutanix will also look to expand its products’ capabilities, adding analytics, the ability to connect to the public cloud and customer services. Last year, Nutanix added software deduplication for primary storage and this month went GA with support for Microsoft Hyper-V to go with its VMware and Citrix XenServer support.

Ting said he expects the IPO to come within a few years. “We don’t want to put a timeframe on it,” he said. “We want to build a company of lasting value, and an IPO will be one step in the journey to build the next iconic tech infrastructure company. We want to build the next VMware or NetApp. The IPO is not the end goal for anyone here.”

NetApp and VMware are also competitors for Nutanix, although VMware remains more of a partner than a competitor now. Ting said Nutanix almost always goes against legacy storage vendors such as NetApp, EMC, Hewlett-Packard, Dell and IBM rather than other hyperconverged startups.

VMware is preparing to enter the hyperconverged market with its Virtual SAN (vSAN) software that pools capacity from ESXi hosts. vSAN is in beta, but is seen as a future competitor to the hyperconverged products on the market.

“We appreciate and respect VMware,” Ting said. “But the [vSAN] product’s not ready yet, it’s not even shipping GA. When it does ship, limitations around scalability and ease of use will prevent it from being widely deployed. It will take them a couple of years. And then, how do they deal with the potential conflict with [VMware parent] EMC? When we displace EMC, EMC can’t do anything about it. But when a VMware sales rep sells vSAN instead of EMC VNX or VMAX, how will that work? We see VMware positioning VSAN for VDI and small organizations.”

Riverwood Capital and SAP Ventures led the Nutanix funding round, with Morgan Stanley Expansion Capital and Greenspring Associates participating as new investors.


January 9, 2014  8:55 AM

EMC adds another CEO to its boardroom

Dave Raffo

Joe Tucci found a way to make David Goulden EMC CEO without giving up his own CEO post.

EMC Wednesday named Goulden CEO of EMC Federation, which consists of EMC’s core storage business. Tucci remains chairman and CEO of EMC Corporation, which includes EMC Federation plus EMC-owned VMware and platform-as-a-service startup Pivotal.

Goulden’s promotion probably won’t mean much in terms of his job function. He has served as president and chief operating officer of EMC Federation since July 2012, and he still performs many of the functions of chief financial officer, a job he held for the previous seven years. That means he was already running most of the major areas of EMC Federation. The promotion does give Goulden experience as CEO, which could help him convince the EMC board that he is ready to take over Tucci’s job when Tucci retires.

Goulden isn’t the only CEO inside EMC primed to replace Tucci, though. VMware CEO Pat Gelsinger and Pivotal CEO Paul Maritz are also candidates, and both have been mentioned as outside candidates to become the next Microsoft CEO.

Goulden’s relationship with Tucci pre-dates his 11-year tenure at EMC. They worked together at Wang Corp. before joining the storage giant.

Tucci may shed light on his current retirement and succession plans during EMC’s earnings call later this month. He had announced a few years ago that he would retire at the end of 2012, but he’s still around and EMC in late 2012 extended his contract through Feb. 2015. His replacement is expected to come from within EMC Corp.


January 6, 2014  2:59 PM

Spanning poised to extend cloud-to-cloud backup capabilities

Dave Raffo

Spanning Cloud Apps CEO Jeff Erramouspe predicts 2014 will be a big year for cloud-to-cloud backup. That, of course, would be a good thing for his company, which provides backup for Google Apps and Salesforce.com.

Spanning enters 2014 with a new CEO (Erramouspe replaced founder Charlie Wood Nov. 1), a GA version of Backup for Salesforce due within the next few months and enterprise momentum from its entry in the EMC Select program as a partner of EMC’s Mozy cloud backup software.

Demand is also rising as more companies host key applications in the cloud. “If people are all-in on the cloud and we can do all five of their apps, that puts us in a good position,” Erramouspe said.

So far, Spanning Backup protects two apps. Like its main competitor Backupify, Spanning started with Google Apps, which it began backing up in 2011. In late 2013, Spanning added a private beta program for Salesforce.

Erramouspe said Spanning is looking to expand to more applications. He said he has been approached by companies in the Salesforce ecosystem, such as cloud CRM vendor Veeva, about building backup for them. But the next major addition will likely be backup for Microsoft Office 365.

“Our big partner [EMC] is interested in that,” Erramouspe said. “They make a lot of money backing up Exchange on premise. They don’t want to lose that revenue stream as the customer goes to the cloud.”

Spanning’s Backup for Google Apps appears on the Google “more” menu, and admins can determine what files they back up. Spanning notifies customers of every file that hasn’t been backed up as well as sync errors that otherwise could go undetected.

Spanning backs up data on Amazon Web Services, storing the files on S3. The company may add the ability to back up cloud apps to an on-site disk appliance this year, although Erramouspe said he has no intention of protecting on-premise apps.

“I don’t ever see us doing on-premise data,” he said. “Our sources are cloud applications.”

Another short-term goal for Spanning Backup is to do restores inside the Salesforce application, as it does for Google Apps. Today Salesforce restores are done by exporting the data and re-importing it. “There’s a lot of manual effort involved,” Erramouspe said.

Erramouspe said Spanning has about 3,000 domain customers (including Netflix) on Google Apps, about one-quarter the number of Google Apps domains Backupify claims to protect. The products have slightly different pricing models: Backupify charges a monthly subscription, while Spanning requires an annual fee up front. Erramouspe said customers who signed up in 2013 have renewed at about a 96% rate.

Spanning charges $40 per user per year, with a 99.9% uptime SLA and unlimited storage.

Backupify and others offer storage-based options that charge customers on a per-TB or per-GB basis. Erramouspe said storage-based pricing “doesn’t make a ton of sense; it means we have to keep track of usage. We price per user per year with unlimited storage.”

Other cloud-to-cloud backup competitors include CloudAlly and SysCloud, and Asigra Cloud Backup for service providers can also protect Google Apps and Salesforce.

Perhaps the biggest threat to the cloud-to-cloud backup providers would be if the Software-as-a-Service (SaaS) vendors decided to offer their own built-in backup. But they have shown little interest in that so far. Without a backup app, getting lost data back from a SaaS provider can cost thousands of dollars.

“Google has [Apps] Vault and they’re saying they will extend that to Google Drive, but we haven’t seen it yet,” Erramouspe said. “I am a little bit concerned about that. I don’t think Salesforce wants to deal with it. They offer a restore service today and go back to tapes, but it’s a high price point and takes weeks to happen. They want to get out of that. But even if Google offers backups, what do I do if I can’t get to my Google application?”


January 2, 2014  11:19 AM

The truth about encryption

Randy Kerns

When talking to IT professionals about encryption, I often notice a lack of understanding about information security. It often comes as a surprise that encryption inside a disk storage system protects data only when someone steals the disk drives out of the system and removes them from the data center.

The main motivation for IT to encrypt data is to meet regulatory requirements. Information such as protected healthcare data (think of patient medical records) must be encrypted because of laws or internal policies. This leads to using storage systems that encrypt the data on the devices in case someone steals the disks and has the skills and perseverance to put the data back together from the different pieces in a RAID group and storage pool.

Without company or regulatory requirements, I do not see wide-scale use of encryption. But if you are looking to encrypt, there are several issues to address.

When encrypting in the disk system, using self-encrypting drives is easy: there is no apparent performance hit, and the extra cost is minor. Storage systems that encrypt in the controller are believed to have a performance impact because they use controller processor cycles. In truth, the performance impact varies greatly depending on the implementation.

Another concern regarding encryption within a storage system is the management of keys used to encrypt and decrypt data. Key management within a storage system is transparent to the IT administration. However, exporting keys to an external key manager adds complexity and bureaucracy. The extra complexity is not worth the bother considering how unlikely it is that a disk drive will be stolen from a storage system inside of a data center.

From an information management perspective, encrypting data in the storage system may give a false sense of information protection. The limited scope of the protection may not be clear when someone claims that their data is encrypted. The reality is that the information should be secured at the application level (encryption as part of the application access/creation). Access and identity control are the most important parts. Encrypting data in disk systems provides no protection against someone using the application or gaining unauthorized access through a server connected to the storage system.
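To make the distinction concrete, here is a minimal sketch of application-level encryption under a few stated assumptions: it uses the third-party Python cryptography package, generates a key inline (a real deployment would fetch it from a key manager), and the record contents are invented. The point is that the storage layer only ever sees ciphertext, regardless of whether the drives or controller also encrypt.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In practice the key comes from a key manager, not from inline generation.
key = Fernet.generate_key()
cipher = Fernet(key)

patient_record = b"patient_id=1234; diagnosis=..."  # invented example data

# Application-level encryption: whatever storage sits underneath sees only ciphertext.
ciphertext = cipher.encrypt(patient_record)
with open("record.enc", "wb") as f:
    f.write(ciphertext)

# Only an application holding the key can recover the plaintext.
with open("record.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == patient_record
```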

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 24, 2013  11:31 AM

EVault sets out to melt Glacier with new archiving cloud

Dave Raffo

EVault this month revealed plans to build an 8 exabyte archiving cloud that will use more than 500 disks per server and will eventually incorporate its parent Seagate’s Kinetic storage. The cloud, called Long-Term Storage Service (LTS2), is already functional, but a blog post from EVault VP Mikey Butler makes it clear that LTS2 is still a work in progress.

The EVault version is faster than Amazon’s Glacier but more expensive. It currently costs $15 per TB per month, or $0.015 per GB per month, compared to Glacier’s price of $0.01 per GB per month. But while Glacier may require five hours to retrieve data, EVault says data stored on LTS2 is immediately available. According to the LTS2 web site, data can be accessed with a first-byte latency of less than five seconds.
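For a sense of what that price gap means at scale, here is a quick back-of-the-envelope comparison; the 100 TB capacity is an arbitrary example, and the rates are the per-GB figures quoted above.

```python
# Monthly cost comparison at the published per-GB rates.
LTS2_PER_GB = 0.015     # EVault LTS2: $15/TB/month = $0.015/GB/month
GLACIER_PER_GB = 0.01   # Amazon Glacier: $0.01/GB/month

capacity_gb = 100 * 1000  # example archive: 100 TB

lts2_monthly = capacity_gb * LTS2_PER_GB        # $1,500
glacier_monthly = capacity_gb * GLACIER_PER_GB  # $1,000

print(f"LTS2:    ${lts2_monthly:,.2f}/month")
print(f"Glacier: ${glacier_monthly:,.2f}/month")
print(f"Premium: ${lts2_monthly - glacier_monthly:,.2f}/month "
      f"({lts2_monthly / glacier_monthly - 1:.0%})")
```

The 50% premium is, in effect, the price of immediate access versus Glacier’s multi-hour retrieval window.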

LTS2 can be accessed through OpenStack Object Storage and Amazon S3 APIs, and EVault offers service level agreements (SLAs) for data durability, availability and portability. EVault says the LTS2 cloud distributes objects across disks, storage nodes, data centers and geographical zones. Customers can access the cloud through gateways from TwinStrata, Maldivica or Riverbed.
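Because LTS2 exposes an Amazon S3-compatible API, standard S3 tooling should be able to talk to it by pointing at the service’s endpoint. Below is a minimal sketch using Python’s boto3 client; the endpoint URL, credentials and bucket name are placeholders, not actual LTS2 values.

```python
import boto3

# Placeholder endpoint and credentials -- substitute the values issued by the service.
s3 = boto3.client(
    "s3",
    endpoint_url="https://lts2.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Archive an object, then read it back on demand -- no multi-hour retrieval job.
with open("report.pdf", "rb") as f:
    s3.put_object(Bucket="archive-bucket", Key="2013/q4/report.pdf", Body=f)

obj = s3.get_object(Bucket="archive-bucket", Key="2013/q4/report.pdf")
data = obj["Body"].read()
```

An OpenStack Object Storage client could be used the same way, since the service supports both interfaces.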

In his blog laying out the LTS2 mission, Butler wrote that EVault and Seagate have set out to “create the world’s largest, most durable, cost effective, easiest to adopt, disk archival cloud.” He also laid out 10 challenges, which include scaling the cloud to 8 exabytes to meet pricing objectives. His vision for the service includes self-healing drives to minimize downtime, 93% of the drives powered down at any time to reduce power consumption, and a low-touch model that will require only one operator for every 100 racks of equipment.

Butler also writes of the “wonderful technology breakthrough” that Seagate calls its Kinetic Drive. Kinetic Drives use Ethernet and object key values rather than SAS, SATA or SCSI block storage interfaces to connect to applications. Seagate’s goal is to eliminate the storage server tier and enable applications to speak directly to the storage device. Seagate’s roadmap calls for Kinetic drives in mid-2014. Butler did not say how many of the LTS2 design goals will require Kinetic Drives, or when the new archiving cloud will implement those new drives.


December 23, 2013  1:57 PM

Forget flash — Seagate invests in more hard drive technology

Dave Raffo

Seagate did its last-minute Christmas shopping in the U.K., picking up hard drive testing and OEM storage enclosure company Xyratex today for $374 million.

Instead of making the big solid-state move that industry watchers keep waiting for, Seagate went deeper into hard drive technology with the U.K.-based Xyratex.

Seagate did not hold a conference call to discuss the deal, but its press release said the deal will strengthen its supply and manufacturing chain for disk drives and guarantee access to capital equipment. Seagate intends to run Xyratex as a standalone business.

“As the average capacity per drive increases to multi-terabytes, the time to test these drives increases dramatically,” Seagate VP of operations and technology Dave Mosley said in the release. “Therefore, access to world-class test equipment becomes an increasingly strategic capability.”

Xyratex enclosures are used for Dell EqualLogic, Hewlett-Packard 3PAR StoreServ and IBM Storwize and XIV storage arrays. NetApp recently ended its OEM relationship with Xyratex.

Seagate and its main enterprise rival Western Digital have both been Xyratex hard drive testing customers. It is unlikely that the Western Digital relationship will continue.

Xyratex in the past two years also began selling ClusterStor high performance computing systems based on the Lustre file system. ClusterStor systems are sold by Cray, Dell, HP and others.

Xyratex is a public company that reported $638 million in revenue through the first nine months of 2013. That was down from $992 million over the same period in 2012, with the end of the NetApp deal causing much of the decline. Xyratex has been profitable this year, but unhappy investors prompted the company to replace CEO Steve Barber with Ernie Sampias in April and called for it to look for a buyer.

Seagate expects the deal to close in the middle of 2014.


December 18, 2013  9:09 PM

IBM to introduce ‘cloud of clouds’ in 2014

Sonia Lelii

IBM has developed a software toolkit that allows users to store and move data in multiple public or private clouds through a drag-and-drop method for data protection against service outages and data loss.

The company has code-named the product “Intercloud Storage (ICStore),” and it uses an object storage interface for data migration, backup and file sharing in third-party clouds. The toolkit will be available in beta to Storwize customers in January 2014, but general availability has not yet been disclosed.

IBM’s SoftLayer will be the default cloud, but the technology will support more than 20 cloud services, including Microsoft Azure, Amazon S3, Rackspace and OpenStack object storage.

“This is an internal research project and we are using clouds to solve a few issues with cloud storage,” said Thomas Weigold, IBM’s manager for storage systems research at its Zurich research laboratory. “It’s a problem with security, reliability and vendor lock-in. These are the points where customers have problems. The idea is to use more than one cloud through replication or erasure coding.”

IBM calls this technology “cloud of clouds” and the company has done proof-of-concepts with a select number of customers during the last two years.

Weigold said the toolkit can be customized so that replication or erasure coding can be used, depending on what the data needs. The toolkit addresses space efficiency, data synchronization and metadata coordination when storing data redundantly on object storage. If one cloud service fails, the other cloud will be available transparently to the user.
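The failover behavior Weigold describes is the essence of the “cloud of clouds” pattern: every object lands on more than one provider, and reads transparently fall back when a provider is unreachable. The sketch below illustrates the general replication pattern only — it is not IBM’s ICStore code, and the toy in-memory backend merely stands in for real object-store clients.

```python
class ReplicatedStore:
    """Write every object to all backends; read from the first backend that answers."""

    def __init__(self, backends):
        self.backends = backends  # e.g. clients for SoftLayer, Azure, S3, ...

    def put(self, key, data):
        for backend in self.backends:
            backend.put(key, data)

    def get(self, key):
        for backend in self.backends:
            try:
                return backend.get(key)   # first healthy cloud wins
            except Exception:
                continue                  # provider outage: fall back transparently
        raise KeyError(key)


class DictBackend:
    """Toy in-memory stand-in for a cloud object store, so the sketch runs on its own."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


store = ReplicatedStore([DictBackend(), DictBackend()])
store.put("snapshot-001", b"primary-data-snapshot")
print(store.get("snapshot-001"))
```

Erasure coding follows the same layout but stores coded fragments instead of full copies, trading extra read complexity for lower capacity overhead, which is why the toolkit lets the policy choose between the two.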

Workloads can be positioned for high availability and disaster recovery. For instance, primary data can be stored in a private cloud while snapshots of the data can be moved to an external public cloud and encrypted.

“You can be very specific,” said Weigold, “such as all files in the directory of this size should be migrated to these providers but not before an application’s copies are replicated.”

The software will have integrated protection features and AES 128-, 192- or 256-bit encryption for security.


December 16, 2013  3:04 PM

Flash array vendor Violin sends CEO overboard following post-IPO swoon

Dave Raffo

Struggling flash array vendor Violin Memory today dumped Don Basile, 11 weeks after he took the company public.

Violin chairman Howard Bain III takes over as interim CEO, and the board has hired an executive search firm to find a permanent replacement for Basile.

Violin’s initial public offering (IPO) turned out to be the beginning of the end for Basile. The company’s stock price dropped from $9 at its IPO to $2.68 this morning, and Violin missed analysts’ expectations in its first quarter as a public company. CTO Jonathan Goldrick resigned last week, and there has been speculation since then that Basile would leave. Violin is also besieged by a rash of investor lawsuits and analyst downgrades because of its poor financial performance last quarter.

Basile became Violin’s CEO in 2009 after serving as CEO of Fusion-io. During his tenure, Violin raised at least $180 million in funding and became the all-flash array market leader – according to Gartner – with $72 million in revenue in 2012.

However, investors and analysts were stunned to learn Basile earned $19 million in 2013 while the company lost more than $90 million through the first three quarters of the year.

Much of Violin’s early sales success came before large storage vendors got into the all-flash market, but all major storage companies now have at least one all-flash platform.

Violin’s press release made it clear that the change in CEO was the board’s choice, and not Basile’s. While the release went into great detail on Bain’s background, it mentioned Basile only once – saying Bain’s new role “follows the decision of the board of directors to terminate Donald Basile.”

David Walrod, chairman of Violin’s nominating and corporate governance committee, was quoted in the release saying “the board believes this leadership change is necessary to enhance the management team’s operational focus and ability to execute the company’s plans for profitable growth.”

Violin lost $34.1 million last quarter on revenue of $28.3 million.

Bain has been on Violin’s board since October 2012 and became chairman in August. He has been CFO of Symantec, Informix, Portal Software and Vicinity.

 

 

