Storage Soup


January 29, 2014  7:09 PM

With little growth in sight, EMC plans layoffs

Dave Raffo

EMC’s earnings report Wednesday served as another reminder that storage spending is not growing at anywhere near the rate that data is. With more data spread around on mobile devices and the cloud, the storage model is changing and companies are not rushing out to buy more traditional storage arrays.

EMC’s $6.68 billion in revenue last quarter was a big jump from the $5.54 billion it reported in its disappointing third quarter. But EMC’s full-year storage revenue of $23.22 billion fell short of the vendor’s original target of $23.5 billion, and its 2014 forecast of $24.5 billion was below Wall Street expectations. EMC executives said industry-wide storage revenue growth continues to slow, and the company announced it will reduce its staff by about 1,000 to cut costs.

The slowdown is clearly more industry-related than specific to EMC. Worldwide storage sales were down overall over the past year, and IBM said its storage revenue dropped 13% year-over-year during its earnings call earlier this month.

“We are disappointed we didn’t hit our original goal of $23.5 billion in revenue,” EMC CEO Joe Tucci said. “We are, however, proud that we did hit our $5.5 billion free cash flow goal, grew considerably faster in the markets we serve, and substantially faster than the overall IT market.”

Tucci added that the transition in the IT business now presents “the biggest, most disruptive and yet most opportunistic transition in IT’s 60-plus year history.” However, he and EMC Information Infrastructure CEO Dave Goulden mentioned several times that CIOs are reluctant to buy in this market. “We recognize that CIOs right now are being very cautious in their spend,” Goulden said. “We are seeing a little bit of a pause in the market … We are really factoring what we are seeing in the market and the dilemma that the CIOs are facing into our thoughts about how the year might play out. So we are taking a conservative view of IT spending.”

Tucci added that EMC expects the IT market to grow by only around 2% this year, while EMC’s guidance represents a 3% gain over 2013.

As for the layoff, Goulden called it a “rebalancing activity” to put EMC’s workforce more in line with the current technology and product landscape. EMC had a similar layoff last May. The company has about 60,000 employees.

“Last year when we did this, we actually wound up with about 2,000 more people at the end of the year when we started off this,” he said. “This year we expect to probably end the year flat or slightly up. Just think of it as rebalancing rather than restructuring.”

January 28, 2014  7:19 AM

Syncsort data protection becomes Catalogic

Dave Raffo

Syncsort Data Protection has an official name, three months after splitting off from the Syncsort data integration company.

The data protection vendor said Monday that it is now called Catalogic Software and has adopted the slogan “Catalog, protect, manage” to describe its DPX data protection and EPX catalog management applications.

The data protection spinoff came when part of its management team and new investors acquired that business from Syncsort. Flavio Santoni, who was CEO of Syncsort, is the Catalogic CEO. The rest of the Catalogic management team consists of chief marketing officer John McArthur, CTO Walter Curti, VP of sales Mike Kuehn, senior director of customer support Ira Goodman, and senior director of business development and alliances Bob Sarubbi.

Their goal now is to keep Catalogic from becoming catatonic in a highly competitive data protection market.

January 23, 2014  10:31 AM

Impediments to innovation

Randy Kerns

The computer storage industry seems interesting to many on the outside. Fellow engineers in other disciplines often ask pointed questions when we get together. The most consistent questions are why there are so many storage startups, and why the big-name storage companies don’t innovate more so there would not be so many startups.

That calls for a long discussion rather than a simple answer. Startups exist because they are the best vehicles for bright people with great ideas to bring their visions to reality. The fact that big vendors don’t innovate at a level that would eclipse startups is really an indictment of how those companies are organized. I’ve worked at a number of these large companies, and I usually relate a few examples I’ve experienced when this discussion comes up. It doesn’t take long for my friends to become somewhat disillusioned about the state of those companies.

The easiest thing to talk about is the cast of characters who stand in the way of bringing an innovative idea to fruition. I’ll name a few types, and I’m sure anyone who has tried to achieve something inside a big company can add more examples:

• The Blockers. These people believe their job is to make every new idea go through their process, and that nothing can advance until they are satisfied the process has been followed to everyone’s satisfaction. Usually, they set up a series of gates that must be passed, which is really their way of forcing their process to be followed. Passing these gates, or even contemplating what it would take, is enough to drive anyone with a great idea out of the company.

• The Diffusers. These people typically don’t understand the idea or its potential value, and they hide that lack of knowledge by adding tangential points to the discussion. These additions dilute the value of the good idea, misdirect the conversation, and lend credence to other ideas that are not relevant. Diffusers usually know what they are doing and intentionally avoid acquiring the knowledge they lack. Or they are dangerously clueless.

• The Nitpickers. These near-OCD people want every detail, through sales and support, covered while the discussion is still at the concept stage. They do not understand how to bring forward a new idea with great potential value. They can cause tremendous delays and demand a great deal of work that is meaningless because it is being done before the cake is ready to go into the oven. Nitpickers add little value and create more problems than they solve.

I also frequently raise the issue of how an established company has different requirements than a startup does for bringing a product to a customer. I only have to show a one-inch-thick copy of the “Safety Guide” for installing a storage system from a large vendor to make my point.

These impediments make up what I call the “Department of Revenue Prevention,” and they drive many of the best and brightest to take their ideas down the startup route. The startup probably will not ultimately succeed, and the idea they worked so hard to bring to market may never pay off. Still, working twice as hard when there is a chance of success beats dealing with the institutionalized impediments most large companies put in place.

It is interesting that established companies did not start out that way. They built in these impediments as they grew, adding the processes and people that create the blockages. It is also unfortunate. But if you try to change that, there is always someone standing in the way.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 21, 2014  7:10 PM

Cloud storage needs to be programmable

Sonia Lelii

Kenneth Hui, open cloud architect at Rackspace, isn’t a big fan of the term software-defined storage, especially when discussing the cloud. He prefers “programmable storage” to describe capacity that is flexible enough to expand and contract based on traffic workloads and the resources needed.

“It’s gotten to the point where software-defined means anything to anybody,” said Hui, during an interview at the Virtualization Technology Users Group (VTUG) last week at Gillette Stadium in Foxboro, MA. “In OpenStack, we have goals to make storage programmable. It’s programmable in the way it’s consumed. I don’t want to go to the storage team to provision storage. It’s managed by a team of cloud administrators and requests are put in by the end users.”

Hui was part of a two-man speaker team (the other was Cody Bunch, a cloud/VMware expert and author who addressed the audience barefoot) that delivered a keynote at VTUG about cloud principles and bridging the gap between VMware and open-source OpenStack.

“You have to understand, OpenStack is not a virtualization tool,” Hui said. “It’s not a monolithic software project. It’s a collection of software projects. OpenStack puts things together to create a cloud platform. It’s a new management layer. It’s one orchestration tool where you spin up enterprise infrastructure from a single pane of glass.”

In the OpenStack cloud world, storage is part of the overall infrastructure, but it is not a one-size-fits-all configuration for every application and traffic workload. OpenStack Swift is object storage for pure cloud applications that need to scale to petabytes of data. Wikipedia, which uses Swift, is an example of a cloud application that requires object storage.

There also is OpenStack Cinder for persistent, block-based storage for high-performance applications, while OpenStack Compute uses ephemeral storage, a data store that is created on the fly and then deleted when it is no longer needed.
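To make the notion of programmable storage concrete, here is a minimal sketch using the openstacksdk Python library, in which a cloud consumer requests a Cinder volume and drops an object into a Swift container through an API call rather than a ticket to the storage team. The cloud profile, names and sizes are hypothetical placeholders, not anything Hui or Rackspace described.

```python
# Minimal sketch of "programmable storage" with the openstacksdk library.
# Assumes a clouds.yaml entry named "demo-cloud" with valid credentials;
# all names and sizes below are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="demo-cloud")

# Block storage (Cinder): request a 10 GB persistent volume on demand.
volume = conn.block_storage.create_volume(name="app-data", size=10)
conn.block_storage.wait_for_status(volume, status="available")
print(f"Volume {volume.id} is ready to attach")

# Object storage (Swift): create a container and upload an object.
conn.object_store.create_container(name="app-objects")
conn.object_store.upload_object(
    container="app-objects",
    name="greeting.txt",
    data=b"hello from a cloud application",
)
```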

“In cloud, instances are mostly temporary,” Hui said. “You usually spin up an instance to fit a specific requirement and you adjust the resources to fit the workload. Right now, in the traditional data center the storage guy tunes the storage and then hands it to the compute guys.”

The bigger question is whether traditional, monolithic storage will co-exist with the cloud or become marginalized by it. Hui believes in the former rather than the latter.

“It’s like when the open systems guys said, ‘This is the end of mainframes.’ Where are mainframes today?” Hui asked. “Mainframes generally make more revenue today than cloud. There will always be some workloads that will stay on mainframes, and it’s not trivial to move off legacy systems. It’s never going to be trivial. When I talk to storage administrators about cloud storage versus traditional storage, it’s not an either/or conversation. It’s what is the best use case.”


January 20, 2014  3:08 PM

QLogic adopts Brocade’s adapters

Dave Raffo

Brocade last week revealed it is getting out of the adapter business, and it has sold off those products to QLogic.

It’s easy to see why Brocade made this move. Despite being the Fibre Channel switch market leader, its host bus adapter (HBA) and converged network adapter (CNA) products never caught on, and it barely made a dent in the market shares of QLogic and its main rival Emulex. Shedding that part of the business lets Brocade focus on its core FC and Ethernet switching.

But what’s in it for QLogic? The purchase price was low enough that the vendors did not have to disclose it, so why does QLogic need Brocade’s adapters? It already has competing products for every one of Brocade’s adapters.

There are two advantages for QLogic, according to its director of corporate marketing Tim Lustig. It will pick up about three points of HBA market share and about 12 points of CNA share by acquiring the Brocade products, plus the deal opens the way for better technical cooperation between the two vendors. This deal follows QLogic’s decision last July to stop development of its FC switching products that compete with Brocade.

“QLogic positions this as a strategic relationship,” Lustig said of the acquisition.

The involved products are the Brocade 1860 Fabric Adapters, the 815/825 and 415/425 FC HBAs, the 1010/1020 CNAs, and HBA and CNA mezzanine cards sold by OEM partners. Brocade began selling HBAs in 2010.

Lustig said QLogic will sell and support Brocade’s adapter products but will not upgrade any of those devices. QLogic will honor Brocade’s OEM deals with IBM, Hewlett-Packard and Dell, which often sell Brocade adapters as lower-cost alternatives to QLogic’s adapters.

“We’re not interested in the technology itself,” Lustig said. “We acquired only the current product lines, and we will be responsible for support of products already sold.”

QLogic will also integrate Brocade’s ClearLink diagnostics technology into its HBAs, following a similar announcement made by Brocade and Emulex last November. QLogic and Brocade have also agreed to align product plans and testing for Gen 5 (16 Gbps) and Gen 6 (32 Gbps) FC technology, and to jointly market next-generation storage area networking (SAN) products.

Lustig said he expects 2014 to be the year when 16-gig FC picks up steam. He said QLogic still gets about 70 percent of its revenue from 8 Gbps FC devices and about 10 percent from 16 Gbps, with most of the rest from 4 Gbps. “The market is just starting to transfer over,” he said. “We think 2014 will be the year for 16-gig.”


January 17, 2014  8:42 AM

SGI buys Starboard assets and engineering, but not its arrays

Dave Raffo

The saga of Starboard Storage ended this week when SGI bought the intellectual property of the hybrid unified storage company. SGI will use the technology in its Active Archiving platform, but will not sell Starboard’s storage arrays.

Bob Braham, SGI’s chief marketing officer, said Starboard’s technology can fill in some gaps of SGI’s archiving platform, especially around high availability. “We found requirements that customers had that we were delivering through professional services,” he said. “Starboard mapped to that perfectly. We found the high availability part most interesting.”

Unified storage vendor Reldata re-launched as Starboard Storage in Feb. 2012, adding flash and auto-tiering to its products. Starboard received $13 million in funding a month after the re-launch, but then in March 2013 investors suddenly put the company up for sale and discontinued sales of its arrays. After failing to find a buyer for the entire company, Starboard closed down late last year but continued to pursue an asset sale.

Braham said SGI will keep most of Starboard’s New Jersey-based research and development team, which also brings flash expertise to SGI.

SGI archiving products also include disk spin-down technology acquired from Copan in 2010, and software it picked up from FileTek last October. SGI sells its archiving product as an appliance or software-only. Either way, Braham said, “the real secret sauce is the software. We scan primary storage for data not frequently used and move data onto lower-cost storage.” FileTek software can be used to move archived data to the cloud as well. Braham would not provide specifics on how Starboard’s technology will fit into the archiving products.


January 15, 2014  12:37 PM

Convergence startup Nutanix makes investors hyper, pulls in $101 million in funding

Dave Raffo

Nutanix released numbers this week that establish the startup as the far-and-away leader in the young hyperconverged storage market. The big news is that it closed a massive $101 million funding round, nearly double competitor SimpliVity’s impressive $58 million round from late 2013.

Although Nutanix’s funding round comes up short of all-flash array vendor Pure Storage’s $150 million round from last August, it does raise the startup’s valuation to close to $1 billion. Nutanix also said it has passed $100 million in revenue in two years and has 13 customers who have each spent more than $1 million on its products – impressive numbers for a startup, especially when overall storage sales dipped in 2013.

Nutanix’s Virtual Computing Platform combines storage, servers and a hypervisor in one box. The storage includes solid-state drives as well as hard drives. Its customers include eBay, Toyota and McKesson.

With $172.2 million in total funding and rapid sales acceleration, the round will likely be the last for Nutanix. The startup is weighing options to go public. The money also gives Nutanix a war chest to battle current and new competitors, including VMware.

“We wanted to raise enough to get us to the next major milestone, which is likely an offering in the public markets,” said Howard Ting, Nutanix vice president of marketing. “We also wanted to fuel the business. We’re seeing tremendous demand for our product.”

Ting said the funding will help Nutanix beef up its international sales team. He said the startup has sales presence in at least 20 countries but will look to put more reps in most of them. Around one-third of its sales have come from outside the United States, which is also unusually high for a U.S.-based startup.

Nutanix will also look to expand its products’ capabilities, adding analytics, the ability to connect to the public cloud and customer services. Last year, Nutanix added software deduplication for primary storage and this month went GA with support for Microsoft Hyper-V to go with its VMware and Citrix XenServer support.

Ting said he expects the IPO to come within a few years. “We don’t want to put a timeframe on it,” he said. “We want to build a company of lasting value, and an IPO will be one step in the journey to build the next iconic tech infrastructure company. We want to build the next VMware or NetApp. The IPO is not the end goal for anyone here.”

NetApp and VMware are also competitors for Nutanix, although VMware remains more of a partner than a competitor now. Ting said Nutanix almost always goes against legacy storage vendors such as NetApp, EMC, Hewlett-Packard, Dell and IBM rather than other hyperconverged startups.

VMware is preparing to enter the hyperconverged market with its Virtual SAN (vSAN) software that pools capacity from ESXi hosts. vSAN is in beta, but is seen as a future competitor to the hyperconverged products on the market.

“We appreciate and respect VMware,” Ting said. “But the [vSAN] product’s not ready yet, it’s not even shipping GA. When it does ship, limitations around scalability and ease of use will prevent it from being widely deployed. It will take them a couple of years. And then, how do they deal with the potential conflict with [VMware parent] EMC? When we displace EMC, EMC can’t do anything about it. But when a VMware sales rep sells vSAN instead of EMC VNX or VMAX, how will that work? We see VMware positioning vSAN for VDI and small organizations.”

Riverwood Capital and SAP Ventures led the Nutanix funding round, with Morgan Stanley Expansion Capital and Greenspring Associates participating as new investors.


January 9, 2014  8:55 AM

EMC adds another CEO to its boardroom

Dave Raffo

Joe Tucci found a way to make David Goulden EMC CEO without giving up his own CEO post.

EMC on Wednesday named Goulden CEO of EMC Information Infrastructure, which consists of EMC’s core storage business. Tucci remains chairman and CEO of EMC Corporation, which includes EMC Information Infrastructure plus EMC-owned VMware and platform-as-a-service startup Pivotal.

Goulden’s promotion probably won’t mean much in terms of his job function. He has already been president and chief operating officer of EMC Information Infrastructure since July 2012. He also still performs many of the functions of chief financial officer, a job he has held for the past seven years. That means he was already running most of the major areas of EMC Information Infrastructure. The promotion does give Goulden experience as a CEO, which could help him convince the EMC board that he is ready to take over Tucci’s job when Tucci retires.

Goulden isn’t the only CEO inside EMC primed to replace Tucci, though. VMware CEO Pat Gelsinger and Pivotal CEO Paul Maritz are also candidates, and both have also been mentioned as outside candidates to become the next Microsoft CEO.

Goulden’s relationship with Tucci pre-dates his 11-year tenure at EMC. They worked together at Wang Corp. before joining the storage giant.

Tucci may shed light on his current retirement and succession plans during EMC’s earnings call later this month. He had announced a few years ago that he would retire at the end of 2012, but he’s still around and EMC in late 2012 extended his contract through Feb. 2015. His replacement is expected to come from within EMC Corp.


January 6, 2014  2:59 PM

Spanning poised to extend cloud-to-cloud backup capabilities

Dave Raffo

Spanning Cloud Apps CEO Jeff Erramouspe predicts 2014 will be a big year for cloud-to-cloud backup. That, of course, would be a good thing for his company, which provides backup for Google Apps and Salesforce.com.

Spanning enters 2014 with a new CEO (Erramouspe replaced founder Charlie Wood on Nov. 1), a GA version of Backup for Salesforce due within the next few months, and enterprise momentum from its entry into the EMC Select program as a partner for EMC’s Mozy cloud backup software.

Demand is also rising as more companies host key applications in the cloud. “If people are all-in on the cloud and we can do all five of their apps, that puts us in a good position,” Erramouspe said.

So far, Spanning Backup protects two apps. Like its main competitor Backupify, Spanning started with Google Apps, which it began backing up in 2011. In late 2013, Spanning added a private beta program for Salesforce.

Erramouspe said Spanning is looking to expand to more applications. He said he has been approached by companies in the Salesforce ecosystem, such as cloud CRM vendor Veeva, about building backup for them. But the next major addition will likely be backup for Microsoft Office 365.

“Our big partner [EMC] is interested in that,” Erramouspe said. “They make a lot of money backing up Exchange on premise. They don’t want to lose that revenue stream as the customer goes to the cloud.”

Spanning’s Backup for Google Apps appears on the Google “more” menu, and admins can determine what files they back up. Spanning notifies customers of every file that hasn’t been backed up as well as sync errors that otherwise could go undetected.

Spanning backs up data on Amazon Web Services, storing the files on S3. The company may add the ability to back up cloud apps to an on-site disk appliance this year, although Erramouspe said he has no intention of protecting on-premise apps.
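For readers curious about what “storing the files on S3” looks like at the API level, here is a generic sketch using the boto3 AWS SDK. The bucket, key layout and export file are invented for illustration only; Spanning has not published its internal design, so this shows the general pattern of landing backup copies in S3, nothing more.

```python
# Generic sketch of writing a backup copy to Amazon S3 with boto3.
# The bucket name, key layout and local export file are illustrative only;
# they are not based on Spanning's actual implementation.
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def store_backup(customer_id: str, app: str, export_path: str) -> str:
    """Upload an exported file under a per-customer, per-app, timestamped key."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    key = f"{customer_id}/{app}/{stamp}/{os.path.basename(export_path)}"
    with open(export_path, "rb") as fh:
        s3.put_object(Bucket="example-backup-bucket", Key=key, Body=fh)
    return key

# Example: store_backup("acme-corp", "google-apps", "/tmp/drive-export.zip")
```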

“I don’t ever see us doing on-premise data,” he said. “Our sources are cloud applications.”

Another short-term goal for Spanning Backup is to do restores inside the Salesforce application, as it does for Google Apps. Today, Salesforce restores are done by exporting the data and re-importing it. “There’s a lot of manual effort involved,” Erramouspe said.

Erramouspe said Spanning has about 3,000 domain customers (including Netflix) on Google Apps, about one-quarter the number of Google Apps domains Backupify claims to protect. The products have slightly different pricing models: Backupify charges a monthly subscription, while Spanning requires an annual fee up front. Erramouspe said customers who signed up in 2013 have renewed at about a 96% rate.

Spanning charges $40 per user per year, with a 99.9% uptime SLA and unlimited storage.

Backupify and others offer storage-based options that charge customers on a per-TB or per-GB basis. Erramouspe said storage-based pricing “doesn’t make a ton of sense, it means we have to keep track of usage. We price per user per year with unlimited storage.”

Other cloud-to-cloud backup competitors include CloudAlly and SysCloud, and Asigra Cloud Backup for service providers can also protect Google Apps and Salesforce.

Perhaps the biggest threat to the cloud-to-cloud backup providers would be if the Software-as-a-Service (SaaS) vendors decided to offer their own built-in backup. But they have shown little interest in that so far. Without a backup app, getting lost data back from a SaaS provider can cost thousands of dollars.

“Google has [Apps] Vault and they’re saying they will extend that to Google Drive, but we haven’t seen it yet,” Erramouspe said. “I am a little bit concerned about that. I don’t think Salesforce wants to deal with it. They offer a restore service today and go back to tapes, but it’s a high price point and takes weeks to happen. They want to get out of that. But even if Google offers backups, what do I do if I can’t get to my Google application?”


January 2, 2014  11:19 AM

The truth about encryption

Randy Kerns

When talking to IT professionals about encryption, I often notice a lack of understanding about information security. It often comes as a surprise that encryption inside a disk storage system only protects data if someone steals the disk drives out of the system and removes them from the data center.

The main motivation for IT to encrypt data is to meet regulatory requirements. Information such as protected healthcare data (think of patient medical records) must be encrypted because of laws or internal policies. This leads to using storage systems that encrypt the data on the devices, in case someone steals the disks and has the skills and perseverance to reassemble the data from the pieces spread across a RAID group and storage pool.

Without company or regulatory requirements, I do not see wide-scale use of encryption. But if you are looking to encrypt, there are several issues to address.

When encrypting in the disk system, self-encrypting drives are easy to use: there is no apparent performance hit and the extra cost is minor. Storage systems that encrypt in the controller are believed to have a performance impact because they use controller processor cycles. In truth, the performance impact varies greatly depending on the implementation.

Another concern regarding encryption within a storage system is the management of the keys used to encrypt and decrypt data. Key management within a storage system is transparent to IT administrators. However, exporting keys to an external key manager adds complexity and bureaucracy. The extra complexity is not worth the bother considering how unlikely it is that a disk drive will be stolen from a storage system inside a data center.

From an information management perspective, encrypting data in the storage system may give a false sense of information protection. The limited scope of that protection may not be clear when someone claims their data is encrypted. The reality is that the information should be secured at the application level, with encryption built into how the application creates and accesses data. Access and identity controls are the most important parts. Encrypting data in disk systems offers no protection against someone using the application or gaining unauthorized access through a server connected to the storage system.
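To make the application-level approach concrete, here is a minimal sketch, assuming the Python cryptography package’s Fernet recipe: the application encrypts a record before it is ever handed to the storage layer and decrypts it only on authorized access paths. Key handling is deliberately simplified for illustration; in practice the key would live in a key manager, never alongside the data.

```python
# Minimal sketch of application-level encryption using the "cryptography"
# package's Fernet recipe. Key handling is simplified for illustration;
# a real application would fetch the key from a key management service
# and enforce access and identity controls around it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: retrieved from a key manager
cipher = Fernet(key)

record = b"patient: Jane Doe, MRN 12345, diagnosis: ..."
ciphertext = cipher.encrypt(record)  # this is what lands on the storage system

# Only an authorized code path with access to the key can recover the data.
assert cipher.decrypt(ciphertext) == record
```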

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

