Storage Soup


July 15, 2014  3:27 PM

Private clouds signal a change in what we call IT

Randy Kerns
Public Cloud, Storage

We have been working with IT clients who deploy private clouds as part of their operations. The reasons include implementing a change in the delivery of IT services, a new infrastructure for dealing with the overwhelming influx of unstructured data, and a way to deploy and support new applications developed for mobile technology. Each of these reasons makes sense and can fit into an economic model given the projected demands.

The types of private clouds also vary. The simplest we have seen from IT clients is an on-premises object storage system used as a content repository. The most common forms are:

• A large-scale repository used by new or modified applications that must deal with large amounts of unstructured data.
• A large, online storage area where data is moved off primary (or secondary) storage to less expensive storage with different data protection characteristics.
• A repository for data from external sources used for analytics. This data may not require long-term retention.

Most of these deployments have built in the ability to use public cloud resources, and customers describe them as hybrid clouds. Public cloud use may take the form of a long-term archive of data (deep archive) where access times are much more relaxed, shareable information in file sync-and-share implementations, or an elastic storage area to absorb large influxes of data that may not be retained for extended periods. Usually a mechanism handles the transfer and security of data to and from the public cloud, typically a gateway device or software. Storage system vendors are starting to build storage gateways into their systems to manage data movement to the cloud.
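To make the gateway role concrete, here is a minimal sketch in Python of the kind of policy sweep such a gateway might run: cold files in an on-premises repository are copied to a public cloud object store and then removed locally. The boto3 S3 call is real, but the bucket name, local path and 90-day policy are assumptions for illustration; a production gateway would add caching, cataloging, key management and recall.

# Minimal sketch of a gateway-style tiering policy: files untouched for more
# than ARCHIVE_AFTER_DAYS are copied to a public cloud object store and then
# removed locally. Bucket name, prefix and local path are placeholders.
import os
import time

import boto3  # AWS SDK for Python

ARCHIVE_AFTER_DAYS = 90
LOCAL_REPOSITORY = "/data/content-repo"    # hypothetical on-premises repository
BUCKET = "example-deep-archive-bucket"     # hypothetical bucket name

s3 = boto3.client("s3")
cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400

for root, _dirs, files in os.walk(LOCAL_REPOSITORY):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            key = os.path.relpath(path, LOCAL_REPOSITORY)
            # Server-side encryption keeps the object encrypted at rest in the cloud.
            with open(path, "rb") as fh:
                s3.put_object(Bucket=BUCKET, Key=key, Body=fh,
                              ServerSideEncryption="AES256")
            os.remove(path)  # free local capacity once the copy is in the cloud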

These are all justifiable reasons and usages for traditional IT operations to deploy private clouds. But what do you call IT that has changed its operations to achieve this? Continuing to use the term IT is simple, but it does not convey that IT has fundamentally transformed its operations and the value it provides. IT as a Service (ITaaS), which is the outcome most transformations aim for, delivers significant value and is different from the IT of the past. The term cloud is ambiguous, used in many different contexts, and probably will not have staying power over time.

Historically, what is known as IT has changed names over time, representing major transitions in the industry. Many remember DP or Data Processing as the term for centralized IT of the past. Some of us even go back to an earlier point when the term was EAM, an artifact acronym for Electric Accounting Machine. There was also the term IS, which stood for Information Systems and later Information Services. Information Technology is the current broadly accepted term in business, but a change is in order with the services and capability transitions underway.

What should the new identity be? Maybe there should be a naming contest. Probably the worst outcome would be for one vendor’s marketing organization to drive the name. In that case the name would exist to promote that vendor’s vision and position its products as the answer, perhaps with a full-court press of paid professionals promoting it. The new identity needs to convey the data center transformation that has occurred. Hopefully, the name will not have “cloud” in it.

The name change will occur and probably start as one thing and quickly evolve. A few years from now, it will be commonplace. The identity change is a step in the logical progression of computing services (keeping in mind the value is in deriving information from data). Terms such as client-server will become footnotes in the history of the industry. Other ideas that were detours down a rough road will seem like another learning experience everyone had to go through. This is an interesting period in which to watch the transitions occur.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

July 14, 2014  12:35 PM

Box raises $150 million but can it overcome the Snowden effect?

Sonia Lelii
Box, Dropbox, Storage

Online storage provider Box last week raised another $150 million from two investment firms, pushing the total amount of its funding to $564 million as it prepares to go public.

While it can be seen as a good sign for the vendor that it can raise that much money in one round, you also have to wonder about the long-term viability of a company that has taken in more than $500 million in funding and has yet to break even.

The Los Altos, California-based company also got a vote of support from research firm Gartner, which named Box a leader in the enterprise file synchronization and sharing market along with Citrix, EMC and Accellion. It’s definitely a milestone for a company to rise to the top of a crowded market that at one point included more than 50 startups.

Box, however, remains unprofitable, with a reported $168 million loss for the past year. It has an expensive business model, with a high burn rate as it tries to convert users lured in with free storage into paying customers.

Box also has to mature beyond basic sync-and-share, which appears to be turning into a feature rather than a full-blown product as companies integrate the technology into other cloud products. Box has opportunities in integrating its technology with other enterprise applications, such as Salesforce.com.

“The basic sync-and-share is a feature,” said Terri McClure, senior analyst at Enterprise Strategy Group. “[Box’s] advanced collaboration and data management is starting to become more compelling. They have a rich API set that allows integration into enterprise applications. They certainly are one of the market leaders in terms of functionality. Dropbox has an API strategy but it’s not as far along.”

McClure said that although Box faces the tough and expensive challenge of converting free users into paying customers, the company is making strong gains in that area. Seven percent of its 25 million users are now paying customers, translating to 1.75 million users. It also has 34,000 companies paying for accounts.

“It’s likely that a good number of those 1.75 million users are corporate users,” McClure said. “When (you are) seeing losses of $168 million against revenue of $124 million, it is easy to point fingers and call it questionable. But can this model work? Yes, over time and with the right investments.”

She said Box needs to build some on-premises functionality into its product to move into the enterprise and will have to make heavy investments in security. McClure said the company needs a European data center.

“Today, Box stores all customer data and files in the United States,” McClure wrote in a research brief. “Box did not report a geographic revenue breakout in the S1 but given the geopolitical and regulatory environment, companies outside the U.S. are hesitant to begin or are prohibited from storing data in the U.S. Cisco and IBM have discussed the fact that security concerns (specific to NSA spying) are inhibiting international sales of hardware. So you can say it’s impacting cloud SaaS and storage.”

Cloud companies also are dealing with what McClure calls the “Snowden effect.” Users are concerned that if a cloud provider holds their data and the encryption keys, then the data can be turned over to the federal government if it is subpoenaed.

“Concerns would be mitigated if Box offers users a method of holding and managing their own key,” McClure said.
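As a rough sketch of what customer-held keys look like in practice, the few lines of Python below encrypt data on the client before it is handed to a sync-and-share service, using the cryptography library’s Fernet construction. The provider then stores only ciphertext, so a subpoena served on it yields nothing readable without the customer’s key. This is an illustration only; a real deployment would add key escrow, rotation and per-file keys.

# Minimal sketch of client-side encryption with a customer-held key.
from cryptography.fernet import Fernet

# The key is generated and kept by the customer (e.g., in an on-premises
# key manager) and is never uploaded to the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Q3 board deck contents"     # stand-in for a real file's bytes
ciphertext = cipher.encrypt(plaintext)    # only this is sent to the provider

# Recovery requires the customer-held key; the provider alone cannot decrypt.
assert cipher.decrypt(ciphertext) == plaintext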


July 11, 2014  2:36 PM

Microsoft Scouts InMage, adds it to Azure roster

Dave Raffo
Storage

Microsoft today acquired cloud disaster recovery vendor InMage, which it will use as part of its Windows Azure cloud services.

InMage Scout combines continuous data protection backup software with replication to move data off-site and to the cloud for DR.

In a blog announcing the acquisition, Microsoft VP Takeshi Numoto wrote that Scout already lets customers migrate to Azure, but Microsoft will integrate the software more deeply with its cloud. Microsoft will sell Scout through Azure Site Recovery, which supports replication and recovery of an entire site directly to Azure.

“This acquisition will accelerate our strategy to provide hybrid cloud business continuity solutions for any customer IT environment, be it Windows or Linux, physical or virtualized on Hyper-V, VMware or others,” Numoto wrote. “This will make Azure the ideal destination for disaster recovery for virtually every enterprise server in the world. As VMware customers explore their options to permanently migrate their applications to the cloud, this will also provide a great on-ramp.”

He added that Microsoft will work with managed service providers who sell InMage’s ScoutCloud DR as a service.

It appears that Microsoft’s strategy for InMage technology is similar to the one it has followed with the cloud storage gateways it acquired from StorSimple in 2012. Although StorSimple supported other large public clouds before the acquisition, Microsoft has integrated the gateways more tightly with Azure and now sells them only to customers who have Azure subscriptions.

The InMage acquisition certainly fits into the “cloud-first” strategy Microsoft CEO Satya Nadella laid out for employees Thursday.

InMage had raised $36 million in venture funding. Microsoft did not disclose the acquisition price.


July 11, 2014  12:50 PM

CA’s arcserve goes out for a spin

Dave Raffo
ArcServe, Backup software, Data protection, Storage

When CA Technologies launched its arcserve Unified Data Protection (UDP) platform in May, it was considered a new direction for the backup and recovery platform. It turns out the direction it is going is away from CA. This week the arcserve team revealed plans to spin out of CA, which signed an agreement with Marlin Equity Partners to divest the assets of the data protection business.

The move is similar to Syncsort’s data protection team spinning out of the parent company last year, eventually re-branding itself as Catalogic Software.

Mike Crest, GM of CA’s data management business unit, will become CEO of the new arcserve company. It will have headquarters in Minneapolis (CA is in New York), and all 500 or so of the arcserve product team are expected to join the new company.

The reason for the spinoff? CA is focused on large enterprise customers (it will retain its mainframe backup software) while arcserve software is a best fit for SMBs and small enterprises.

Crest wrote in a letter to arcserve customers:

“For CA Technologies, the divestiture of arcserve is part of a portfolio rationalization plan to sharpen the company’s focus on core capabilities such as Management Cloud, DevOps and Security across mainframe, distributed, cloud and mobile environments. As part of that plan, CA has a strong commitment to thoughtfully placing divested assets, such as arcserve, in environments that enable mutual gain for customers, partners, employees and shareholders.”

As arcserve VP of product marketing Christophe Bertrand put it, “the markets we serve are not the traditional markets CA serves today. CA will continue to sharpen its focus and manage its portfolio accordingly. It made perfect sense to look at this as the next step for the arcserve business.”

Bertrand and arcserve VP of product delivery Steve Fairbanks said the new company will build its technology around the new UDP platform. UDP combines previous arcserve data protection products — Backup, D2D, High Availability and Replication — under a common interface along with new features. When UDP was released, Fairbanks called it a re-invention of arcserve.

Bertrand and Fairbanks said they are convinced Marlin will provide all the backing arcserve needs to succeed.

“Marlin has indicated they want to invest in arcserve as a platform,” Fairbanks said. “As we grow organically and we grow revenue, we will re-invest back in the business and grow the size of the company over time.”


July 10, 2014  10:00 AM

Avere grabs funding, points to the cloud and IPO

Dave Raffo
Cloud storage, NAS acceleration, Storage

Avere Systems today closed a $20 million series D funding round that founder and CEO Ron Bianchini said he expects will take the NAS acceleration vendor to break-even and an initial public offering (IPO).

If he’s correct, the route to the IPO will run through the cloud.

Although Bianchini sold his previous storage company, Spinnaker Networks, to NetApp for $300 million in 2003, he envisions a different outcome for Avere.

When asked if he expected this would be Avere’s last funding round, Bianchini said, “that is our plan. I believe there is an IPO in our future. Our next big focus is break-even, and that’s what we hope this takes us to.”

He said the new funding will be used to beef up sales and marketing for Avere’s FXT Series of edge filers. Bianchini said Avere has just over 100 employees now and he expects to add 30 by the end of the year.

He declined to say how many customers Avere has, but said it is in “triple digits,” and most of them are large enterprises. Since releasing version 4.0 of its operating system in late 2013, Avere has focused on separating capacity from performance by pushing data to the public cloud.

“You can put our product and performance anywhere and put the repository anywhere as well, including the public cloud,” Bianchini said. “We play at the intersection of enterprise and cloud. We’re about driving enterprise performance and features, and doing that with data in the cloud.”

Avere began in 2009 selling NAS acceleration appliances designed to improve performance while lowering the cost. It later added global namespace and then WAN acceleration that could move data from the data center to the edge with little latency. Version 4.0 of Avere’s FXT software was the piece that completed the cloud picture. Customers can now use Avere’s FlashCloud software as a file system for object storage and move data to the cloud without requiring a gateway. Avere supports Amazon’s public cloud, and Bianchini expects to add OpenStack and Ceph support soon.
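At a high level, an edge filer of this kind behaves like a read-through cache in front of an object store: hot data is served from local flash at LAN latency, and misses are fetched from the cloud and cached for the next access. The toy Python sketch below illustrates that split between performance at the edge and capacity in the cloud; it is not Avere’s implementation, and the object-store reader and capacity limit are assumptions.

# Toy read-through cache in front of a cloud object store, illustrating the
# "performance at the edge, capacity in the cloud" split. Not Avere's code.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, fetch_from_cloud, max_items=1024):
        self._fetch = fetch_from_cloud      # callable: key -> bytes (cloud read)
        self._max = max_items               # stand-in for local flash capacity
        self._cache = OrderedDict()         # LRU order: oldest entry first

    def read(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)    # cache hit: served at local latency
            return self._cache[key]
        data = self._fetch(key)             # cache miss: slower cloud round trip
        self._cache[key] = data
        if len(self._cache) > self._max:
            self._cache.popitem(last=False) # evict the least recently used item
        return data

# Usage with a hypothetical cloud reader:
# cache = EdgeCache(lambda key: s3.get_object(Bucket="bkt", Key=key)["Body"].read())
# blob = cache.read("projects/render/frame-0001.exr")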

Along the way, Avere went from a company that could help storage giants like EMC and NetApp drive better performance to one that could reduce the need to use their NAS boxes. “With the cloud, people can replace their storage infrastructure with the cloud for the capacity piece,” he said. “I imagine we’ll have fewer friends in the enterprise space, because you can take them out now.”

The big vendors have their own plans for the cloud, which EMC made clear this week with its TwinStrata acquisition.

The funding round, led by Western Digital Capital, brings Avere’s total funding to $72 million. Previous investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners and Tenaya Capital participated in the round.


July 8, 2014  5:34 AM

EMC acquires TwinStrata, refreshes VMAX, Isilon arrays

Dave Raffo
EMC, Isilon, TwinStrata, XtremIO

EMC World 2014 in May was light on product launches, as the vendor did not make any of the expected updates to its storage array platforms. Those launches came today when EMC refreshed three of its storage platforms during an event in London, bringing out new-generation VMAX and Isilon systems and expanding its XtremIO all-flash array family.

EMC also threw in a surprise acquisition, picking up cloud storage controller vendor TwinStrata for an undisclosed price. EMC will use the technology to embed cloud access capabilities into its VMAX high-end enterprise SAN platform.

TwinStrata’s CloudArray supports public clouds such as Amazon AWS and Google Cloud Storage. It was one of a handful of startups that came out a few years back with gateways that cache hot data on-site and move older data to the cloud. Microsoft acquired TwinStrata competitor StorSimple in 2012 to use as a gateway to its Windows Azure cloud.

TwinStrata, which began selling its cloud gateways in 2010, supports iSCSI block and file storage and has added cloud disaster recovery and cloud-to-cloud migration features in recent years.

Jeremy Burton, EMC president of products and marketing, described TwinStrata as “very focused on block storage” and said its technology “allows us to go from VMAX to any private cloud or the public cloud, and treat the public cloud as another tier of storage.”

Barry Ader, vice president of product management for VMAX, said EMC will continue to sell TwinStrata’s CloudArray but is most interested in using its software inside of VMAX. CloudArray is available as a physical or virtual appliance. Ader said EMC will evaluate if TwinStrata technology is a good fit for other EMC platforms but TwinStrata CEO Nicos Vekiarides and the rest of TwinStrata’s employees will become part of the VMAX team.

Other EMC product launches today include VMAX 100K, 200K and 400K arrays with integrated storage services such as data protection; Isilon S210 and X410 scale-out NAS arrays and an upgraded OneFS file system; and an entry-level XtremIO X-Brick, plus the addition of inline compression to the all-flash platform.

For more on the VMAX launch, click here. For more on the Isilon launch, click here. For more on XtremIO, click here.



July 3, 2014  7:36 AM

Storage management requires continuous learning

Randy Kerns
Storage

Despite the obvious evolution of the storage industry, the problems of storing and managing information are the same ones that have existed since the days of Data Processing in the 1960s. The problems have actually grown, with nuances that make them harder to solve.

There are solutions and technologies that provide the means to deal with the magnitude of data, the performance demanded by business, and cost economies in a competitive world. The problem is keeping up with those storage developments. There are inflection points in the storage industry where technology makes a dramatic difference in capabilities. These inflection points spawn new products, refinements of the technology, and promises of the next new thing that will become another inflection point. For those in IT organizations responsible for storing and managing information, keeping up with these developments is a continuous task.

These developments lead to new products and solutions that will improve operations. They provide greater performance and efficiency, and simplified management at a lower cost.

The new technologies are often the most effective way to address increasing capacity demands and the problems those demands create. Discussions from outside the IT organization — at conferences, in vendor presentations, and in published articles — move quickly to the new products and solutions. Executives in the company will hear about a new technology and ask, “Why aren’t we doing this?” Technical professionals define their careers by applying technology to solve problems. If they don’t fully understand the latest technology, they may feel their careers are compromised and limited.

All this means that continually learning about developments in the storage industry is critical. With the pace of change experienced over the last 20 or 30 years, it does not take long to fall behind the knowledge curve.

Continued learning is the responsibility of the individuals in IT. This is really the admission that “you need to look out for yourself.” A company with wise management will recognize the increased value that IT staff will bring by learning about the new technologies and products and how to best utilize them. It may be up to the IT director or CIO to take the initiative to facilitate the education required to remain current with the industry. Regardless, individuals must also take responsibility.

Some industries see developments occur at a rapid pace while others seemingly plod along with few changes over the career of their technical people. Storage is in the former category. I was reminded of this on a long vacation flight with a group of engineers from other fields whom I have known since college. On the flight, I was catching up on technology and product developments by reading the many documents I had brought with me. Between the briefings I receive from vendors and my own research, I seem to be in constant learning mode. The others asked why I was reading so many different articles and documents. They were surprised when I explained the amount of time required just to keep up with happenings in the storage industry. Each of them said they did not need to spend much time following new developments in their own discipline.

Not keeping up with the latest storage technology and products means IT staff fall behind quickly and fail to deliver the greatest value possible. It is both a career issue and a company competitiveness issue.

Conferences and education classes (especially ones delivered by independent organizations) are effective means for getting information quickly and in concise formats. There are education opportunities out there – take advantage of them.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 30, 2014  6:07 PM

Crossbar’s Resistive RAM claims milestone, but future ship date remains unclear

Carol Sliwa
Storage

Many enterprises are still getting a handle on the best ways to use NAND flash, but they can already scope out some of the potential successors to the solid-state storage technology.

Crossbar Inc. specializes in non-volatile 3D Resistive RAM (RRAM), which promises greater endurance and higher density than NAND flash. The Santa Clara, California-based startup claims it has reached an important milestone toward commercializing terabyte-scale memory arrays on a chip the size of a postage stamp.

Crossbar announced today its demonstration of pre-production 1 MB arrays that use its patented 1TnR – which stands for 1 Transistor driving n Resistive memory cells – for read/write operations. Crossbar said a single transistor was able to drive over 2,000 memory cells at low power to produce exceptionally dense solid-state storage.

Yet, although Crossbar’s RRAM carries the potential to achieve higher density and greater endurance at lower power than NAND flash, it’s an open question if or when the technology could reach mass production and turn up in shipping products.

Jim Handy, chief analyst at Objective Analysis in Los Gatos, California, said RRAM is a good candidate to replace NAND flash once NAND costs stop scaling, likely after 3D NAND runs its course. He said every NAND flash maker is looking at RRAM as the heir apparent, and Crossbar has a shot at the market.

“In NAND, it’s all about cost,” Handy noted. “Even though RRAM has significantly better specs – like endurance, over-write, speed and power – in the end, the lowest-cost solution always dominates the market.”

Handy said the Crossbar technology is interesting since it doesn’t require a separate select diode, as nearly every other technology does. “Select diodes have been pretty tricky so far. What we need to find out is whether it ramps into production smoothly, and it’s too early to know that yet,” he said.

Alan Niebel, founder and CEO of WebFeet Research in Monterey, California, said Crossbar’s emerging RRAM “could fall on its face if the technology hits a hurdle or no major manufacturer assists Crossbar in further developing and manufacturing it.”

But, Niebel gave Crossbar’s RRAM an 80/20 chance of seeing the light of day in shipping products, as long as the company partners with an integrated device manufacturer (IDM) or original equipment manufacturer (OEM). He speculated that Crossbar’s RRAM could reach enterprise products around 2018.

Crossbar said today that it is finalizing agreements with several leading global semiconductor companies and plans to announce its first licensing agreements shortly. Commercial shipments in enterprise products are expected in 2017, according to Sylvain Dubois, vice president of marketing and business development at Crossbar.

In the meantime, potential use cases for Crossbar’s RRAM are an interesting topic of discussion. Robin Harris, chief analyst at StorageMojo in Sedona, Arizona, said that if Crossbar gets a terabyte on a single die the size of a thumbnail, one possible destination could be a non-volatile RAM module for an in-memory database.

“What if you could have a server that had 64 TB in non-volatile main memory? You can put a very large in-memory database in there,” said Harris. “Then the implications are: ‘Gee whiz, all of a sudden I don’t need my EMC VMAX. There’s a whole bunch of storage that I probably wouldn’t need, and of course, eventually it would be a lot cheaper than buying a VMAX or some other high-performance array.’ ”

But, Harris said it’s difficult to pinpoint the best place to leverage the Crossbar RRAM technology until we know more about the performance characteristics and price. He said it’s quite possible the initial application will be low-power mobile devices such as tablets and smart phones.

Harris expressed optimism about Crossbar’s chances of success in creating an economically viable product. He said the company’s co-founder, Wei Lu, an associate professor at the University of Michigan, is an “extremely smart guy and cutting-edge researcher,” and Crossbar has top-tier venture capital backing from firms such as Kleiner Perkins Caufield & Byers. Harris added that Crossbar’s RRAM appears to be conducive to cheap manufacture at existing semiconductor fabs.

“The real test comes when the fab starts sampling chips,” said Harris. “That’s when you discover if you really have a market.”


June 27, 2014  1:09 PM

Dell-Nutanix deal can be hyper-contagious

Dave Raffo
Dell, Nutanix, SimpliVity, Storage

Dell and Nutanix took a lot of people by surprise this week when they disclosed an OEM deal that will result in Dell selling Nutanix hyper-converged software on PowerEdge servers. If this plays out like other emerging technologies have, you can expect more established storage vendors to partner with – or acquire – hyper-converged technology that bundles storage, networking, and virtual servers.

There was an erroneous report a few weeks ago that Hewlett-Packard was set to acquire SimpliVity, a rival of Nutanix. That was likely the result of rumblings in the industry about large vendors looking to add this type of technology. It’s no secret that EMC is working on its own hyper-converged system – Project Mystic – and most of the major server hardware vendors will have pre-packaged hyper-converged bundles running VMware’s Virtual SAN (VSAN) software.

“This speaks volumes about the validation of our technology and our product,” Nutanix VP of marketing Howard Ting said of the Dell deal.

When a hot storage technology starts gaining momentum, the major vendors always look to jump on the bandwagon. They sometimes develop their own, but it is easier to buy it from a startup that has spent years and hundreds of millions of dollars developing it. Big vendors looking to take a shortcut into hyper-convergence have a few options but not too many. Software-only hyper-convergence startup Maxta would make a good target. SimpliVity CEO Doron Kempel says his company is not for sale, but can be an OEM partner despite requiring a PCIe card to carry out its inline deduplication without impacting performance.

“We have the technological capability to do that,” he said of an OEM deal. “We can give a large server vendor functionality of our stack, which includes the software and card.”

Kempel insists that he has never discussed a sale, OEM deal or any other partnership with HP, however.

“I’ve never spoken with HP and never had any M&A discussions,” he said. “We have no commercial relationships with them. We’re not being acquired by anyone, not even for $2 billion.”

Taneja Group consulting analyst Arun Taneja said the Dell-Nutanix deal “puts hyper-convergence in hyper-drive” and predicts more deals. “Hyper-convergence is a totally tomorrow type of product,” he said. “It’s taking all these technologies that at one time were best of breed but now you can integrate them.”

Sanbolic CEO Momchil “Memo” Michailov predicted the Dell-Nutanix deal will spark a feeding frenzy of partnerships involving other types of server-attached storage software as well.

“This is classic Dell: take a high-value product, smash it into the channel and roll out a commodity offering,” Michailov wrote in an e-mail statement. “As an entry-level platform, this might be the right move for Nutanix and lead to a successful IPO down the line. From a broader industry perspective, the move is the first in what I expect to be a line of strategic plays to move billions of dollars in market value from storage back to the server layer. HP, IBM/Lenovo and others won’t be far behind, but the real enterprise market opportunity relies on software-defined storage players that can offer tier-one, scale-out capabilities.”


June 26, 2014  1:58 PM

HP brings Helion services to the cloud

Sonia Lelii
Storage

HP has renamed an extended version of its managed storage and backup services as Helion.

Formerly called Storage Management Services and Backup Restore services, Helion now is positioned as a managed cloud offering that uses HP 3PAR StoreServ and HP StoreOnce Backup systems as the underlying hardware. The offering can be customized for private, virtual private and public cloud. It leverages block, file and object storage along with backup services.

“We collapsed a set of servers and provisioned gigabytes of storage into a single offering,” said Mike Meeker, HP’s global offering manager for HP enterprise services. “The focus is extending the existing offering. We have a core set of services and structures. With that structure comes the ability of multiple features.”

Meeker said the offering can be customized. For instance, multiple tiers in a SAN or block array can be configured for high-performance applications that require low latency. The service also has tiers of file services that can be used for numerous use cases based on the type of application and database across private, virtual private and public clouds.

Its customized configurations leverage block, file and object storage as well as backup services powered by HP 3PAR StoreServ and HP StoreOnce Backup systems. The different tiers and features are front-ended with a new portal.

The block-based service targets applications and databases that need high performance via Fibre Channel connectivity. The file service provides NAS file-level storage accessed over the Internet Protocol. The object storage service gives backup applications the ability to add, delete and modify files via REST or HTTP protocols, as the sketch below illustrates. Server backup creates copies of data for servers in the data center and remote offices.
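As a rough illustration of what “add, delete and modify via REST” means for a backup application, the Python snippet below issues the basic HTTP verbs against a generic object endpoint using the requests library. The endpoint URL, container and token are placeholders, not HP’s actual API.

# Generic REST object operations of the kind a backup application would issue.
# The endpoint, container and auth token are placeholders, not HP's actual API.
import requests

ENDPOINT = "https://objects.example.com/v1/backup-container"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}                  # hypothetical auth token

# Add (or overwrite) an object with an HTTP PUT.
requests.put(f"{ENDPOINT}/db-dump-2014-06-26.bak",
             data=b"<backup bytes>", headers=HEADERS)

# Read it back with a GET.
backup_bytes = requests.get(f"{ENDPOINT}/db-dump-2014-06-26.bak",
                            headers=HEADERS).content

# Remove it with a DELETE.
requests.delete(f"{ENDPOINT}/db-dump-2014-06-26.bak", headers=HEADERS)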

“They now have the flexibility to pick and choose different tiers, at different price points and different application needs,” Meeker said.

