Storage Soup

April 27, 2016  1:34 PM

Scality’s RING now pre-integrated with new Dell cloud storage system

Sonia Lelii

Dell and Scality recently added a highly dense, purpose-built cloud storage system, pre-integrated with Scality’s RING object storage software, to their reseller product lineup.

The SD7000-S cloud storage system scales to 688 TB of raw storage in a 4U form factor, or 6.9 PB of capacity in a single rack. The jointly engineered SD7000-S has two server nodes, with two Xeon E5-2650 v3 processors per node and a dual-port 10 Gigabit Ethernet network interface card. The 4U enclosure holds 90 hot-plug 8 TB 3.5-inch disk drives.
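As a quick sanity check on those figures, the per-rack number lines up with roughly ten 4U enclosures per rack; the rack layout below is an assumption for illustration, not a vendor spec.

```python
# Back-of-the-envelope check on the capacity figures above.
# Assumption: the 6.9 PB "single rack" figure corresponds to about
# ten 688 TB 4U enclosures in one rack -- illustrative only.

enclosure_raw_tb = 688        # raw capacity per 4U enclosure, per the article
enclosures_per_rack = 10      # assumed rack layout

rack_raw_pb = enclosure_raw_tb * enclosures_per_rack / 1000
print(f"Approximate raw capacity per rack: {rack_raw_pb:.2f} PB")  # ~6.88 PB
```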

The Scality RING can be deployed with three SD7000-S storage servers. The software provides multi-petabyte storage for unstructured data with a single distributed namespace spanning one or multiple sites, and offers file and object access with optional OpenStack APIs.

The RING software uses a decentralized, distributed architecture, providing concurrent access to data stored on x86-based hardware. RING’s core features include replication and erasure coding for data protection, auto-tiering and geographic redundancy within a cluster.
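Scality doesn’t spell out a single protection scheme here, but the reason an object store offers both replication and erasure coding comes down to a capacity-versus-resilience trade-off. A minimal sketch with generic, assumed parameters (not RING defaults):

```python
# Illustrative storage-overhead comparison for replication vs. erasure
# coding. The parameters are generic examples, not Scality RING defaults.

def replication_overhead(copies: int) -> float:
    """Raw bytes consumed per usable byte when keeping N full copies."""
    return float(copies)

def erasure_coding_overhead(data_chunks: int, parity_chunks: int) -> float:
    """Raw bytes consumed per usable byte with a k data + m parity scheme."""
    return (data_chunks + parity_chunks) / data_chunks

print(replication_overhead(3))         # 3.0x raw, tolerates loss of 2 copies
print(erasure_coding_overhead(9, 3))   # ~1.33x raw, tolerates loss of any 3 chunks
```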

Scality also has a reseller agreement with HP that became official in October 2014, with the Scality software running on HP ProLiant servers. In August 2015, Scality scored its deal with Dell when it was added to the company’s Blue Thunder program, which combines software-defined storage with Dell servers.

April 25, 2016  6:07 AM

Panzura enables ‘in-cloud NAS’ for Microsoft Azure

Carol Sliwa

Panzura’s Global File System (GFS) is certified to run in Microsoft Azure, giving customers a second “in-cloud NAS” option with a major cloud provider.

The GFS, running on Panzura’s Cloud Controller, has been available in Amazon since November 2013. But Barry Phillips, chief marketing officer at Panzura, said the company didn’t see Microsoft Azure object storage taking off from a storage perspective until last year.

“It went from not having many on Azure storage to having a large number on Azure storage,” he said.

Phillips said, under a typical scenario, customers move all of their unstructured file data into a public or private “cloud bucket.” Panzura caches the hot data on premises in a controller that runs in a physical appliance or a virtual machine; the company sells all-flash and hybrid cloud controllers for on-premise use. Colder data that customers rarely use is stored in the cloud. Panzura supplies the global file system that interfaces with object storage such as Microsoft Azure, Amazon S3, Google, EMC’s Atmos and IBM’s Cleversafe.
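Panzura hasn’t published its caching internals, but the general pattern the controller implements is a read-through cache in front of the object store. A purely illustrative sketch; the object-store interface and cache structure here are assumptions:

```python
# Conceptual sketch of hot/cold caching in front of a cloud bucket.
# This is not Panzura's implementation; the object_store interface
# and cache structure are assumptions for illustration.

class ReadThroughCache:
    def __init__(self, object_store):
        self.object_store = object_store   # cloud bucket holding all file data
        self.cache = {}                    # on-premises cache of hot data

    def read(self, path: str) -> bytes:
        if path in self.cache:              # hot data served locally
            return self.cache[path]
        data = self.object_store.get(path)  # cold data fetched from the cloud
        self.cache[path] = data             # promote into the local cache
        return data
```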

By enabling the Panzura Cloud Controller to run in Azure and Amazon, Panzura is giving customers the opportunity to use the same global file system on premises and in the public cloud. Customers also have the option to run their applications in Azure and Amazon and use Panzura as in-cloud NAS, with no on-premise file storage.

The Panzura Cloud Controller is available on the Microsoft Azure Marketplace.

“We fundamentally believe going forward that as companies move to the cloud, being able to put all of their file system in the cloud itself whenever possible is something that they’ll be looking to do,” Phillips said. “Of course, if the distance to their office is too far from any cloud, then they can certainly run one of our on-premise cloud controllers.”

He said remote access of a file system over a long distance would be slow because of bandwidth and latency. But a customer could mix and match, with some branch offices running on-premise controllers while others have no on-site infrastructure and use only the in-cloud NAS, he said.

Phillips said customers also are able to mesh together file systems in Azure and in Amazon using the Panzura software. “It’s not an either/or with us,” he said.

Microsoft Azure operates data centers in 22 regions around the world. Locations include California, Texas, Illinois, Iowa and Virginia in the U.S.

“As more and more companies want to move their infrastructure into the cloud but still have on-premise performance, then having those data centers in the middle of the U.S. is helpful,” Phillips said.

The Panzura Cloud Controller provides global file locking, which lets users work with applications built for local use over a wide area network (WAN), along with global snapshots, deduplication and compression, and security for data at rest and in transit between controllers and the cloud.

“Our customers essentially have Panzura controllers and a cloud bucket. That is all,” Phillips said. He said workflow, operational expenses, and maintenance of backup and archive go away, because cloud providers such as Amazon store multiple copies of the data and can withstand two data centers going down.


April 21, 2016  4:01 PM

Pure founder: FlashBlade has more potential than FlashArray

Dave Raffo
Pure Storage

Since introducing its FlashArray all-flash SAN array in 2012, Pure Storage has sold more than $600 million worth of a type of storage array that didn’t exist before Pure was born.

The man who led the design of that system says he expects its second all-flash platform to eventually sell even more.

Pure founder and chief architect John Hayes says Pure’s FlashBlade scale-out NAS and object system, which will launch later this year, has a larger potential market because unstructured data is growing much faster than the structured data that FlashArray is built for.

“Ultimately, it’s a larger use case,” Hayes said of FlashBlade. “We looked at all infrastructure data, everything from files to archives. That’s a broad target in the data center and today you have all of these different products optimized for different points. Our theory was that we could actually hit all these optimization points. It’s also the area that’s growing fastest. Databases and virtual machines aren’t high data growth, that’s like 10 percent a year growth. All the unstructured data is growing around 40 percent a year. And the variety of applications for unstructured data is increasing. We’ll sell both platforms into a lot of organizations. We’ll sell FlashArray to the IT team and FlashBlade to the engineering team.”

Hayes said FlashBlade, which uses object storage with a file system, is built to accommodate thousands of servers, a load that traditional storage arrays cannot handle even if they are filled with solid-state drives.

“We believe in using the network because networks are getting much better,” he said. “It’s also about taking away the limits. Why do people want to use [Amazon] S3, for example? A big part of it is because it’s unlimited. You’re not creating a problem in the future where you won’t be able to store enough data. That’s why we wanted to make a box that’s really an unlimited data store that’s attached to as many computers as you need to attach to it.”

Not everyone agrees with Pure’s vehicle for expansion. In a blog post on his company’s web site, Coho Data CTO Andy Warfield said FlashBlade’s architecture has problems. Warfield wrote that Coho Data considered a similar product in 2013 before scrapping plans. He criticized FlashBlade mainly because it uses proprietary flash hardware and is not flexible enough to be a true scale-out system.

Hayes seemed more confused than upset by Warfield’s criticism. “I read it. I don’t really understand his point of view,” Hayes said. “I don’t know what to say. They’re building stuff, we’re building stuff. I don’t have much to say about it.”

Hayes also doesn’t have much to say about whether Pure will expand into other types of products, except that any new offerings may address another market. “I think between the two products we have, we’ll be able to cover almost all storage in the data center,” he said. “If we launch any new products, it’s probably in a different category.”

They won’t be software-only, despite Hayes’ background with software companies before Pure. He said software is the key to success for any all-flash system but a software-only product makes little sense.

“It’s an enormous amount of work to establish hardware compatibility,” he said. “It’s going to take us more engineering to ship a software-only product. I don’t understand what the customer benefit is going to be if they have to integrate the software and hardware themselves. It probably won’t save them any money.”


April 20, 2016  6:28 PM

Gridstore to CEO: Great job, now get lost

Dave Raffo

Gridstore apparently grew so fast under CEO George Symons that its board decided to change CEOs to keep up with the rapid growth.

Gridstore founder and CTO Kelly Murphy has moved back into the CEO role on an interim basis until the hyper-converged vendor finds a replacement for Symons.

“Gridstore closed out a record year in 2015 in both revenue and customer acquisition and launched 2016 with a new round of investment,” Gridstore VP of corporate communications Douglas Gruehl said in an e-mail statement. “In order to manage the company’s hyper growth, the board has decided that a new CEO with experience in managing a fast growing company is needed. A search for a new CEO is underway.

“With the new investment Gridstore is expanding rapidly, adding to sales, support, and R&D worldwide; it is truly an exciting time for us.”

When Gridstore closed its $19 million funding round in January, Symons said he was looking forward to growing the business and doubling its headcount, particularly in sales and marketing. But that funding round brought other changes that may have spelled the end for Symons. Gridstore replaced chairman Geoff Barrell with Nariman Teymourian, who ran Hewlett Packard Enterprise’s Converged Systems division. Kevin Dillon of Atlantic Bridge Capital, which led the funding round, also joined the board.

Gridstore got a head start on rebuilding its executive team at the same time, bringing in Dell veterans James Thomason (chief strategy officer) and Kevin Rains (chief financial officer) and a VP of sales, Phil Lavery, who also came from Atlantic Bridge.

Symons became Gridstore CEO in 2013, after stints as a CEO at Yosemite Technologies and Evostor, CTO at EMC, COO at Xiotech and chief strategy officer at Nexsan. He transformed Gridstore from a company that sold storage appliances for Microsoft Hyper-V to an all-flash hyper-converged vendor, still focused on Microsoft. In January, Symons said Gridstore revenue grew 343% year-over-year in 2015. “It surprised me how quickly it happened,” he said at the time.

So quickly that his board felt he couldn’t keep up.

April 20, 2016  3:41 PM

EMC sales decline, says Dell deal on track

Dave Raffo
Dell, EMC

EMC’s storage sales declined more than expected last quarter as the vendor waits to become part of Dell.

EMC executives offered several reasons for the drop in sales, but not the most obvious one: customers are reluctant to buy until they see what happens if and when the $67 billion Dell deal closes.

EMC CEO Joe Tucci and David Goulden, who heads the storage business, said the decline was due to product cycle transitions and an overall caution in IT spending that caused a backlog of deals. Those reasons are often cited by storage vendors for poor results, and may be valid in this case. But it’s unrealistic to think that none of the reluctance to buy is related to the pending Dell deal.

On today’s earnings call, Tucci emphasized that the Dell deal is on track to close “under the original terms and under the original timeframe.” The original timeframe called for it to close between mid-2016 and October. And Tucci said EMC’s plans for 2016 call for revenue growth, indicating he expects the sales declines to be reversed in coming months.

Tucci called the Dell deal “a great strategic option” and said “the combination of EMC and Dell creates a powerhouse in the IT industry. Integration planning has accelerated. [Dell and EMC] have developed detailed integration plans to assure we hit the ground running when the merger closes.”

He said regulatory approval has been granted throughout the world except for China. EMC stockholders still have to approve the deal. And, of course, the $57 billion in funding must also be secured, but Dell and EMC execs have said that is no problem.

Tucci would not comment on what role he would play in the new company, which will be headed by Michael Dell. He didn’t exactly sound like he is resigned to ride off into the sunset for his long-anticipated retirement, though.

“I’m going to punt a little bit, and then I’m going to tell you the absolute truth,” Tucci said when asked about his role after the Dell deal. “To me, this is all about making sure it’s a good deal for our customers, our shareholders and our people, and they’re all priority number one to me. And it’s not about me. I have a lot of energy left, I’m going to continue to work doing different things. Potentially I could help advise Michael, but I just don’t want to go there yet, and Michael and I have not gone there yet.”

EMC Information Infrastructure (the storage group) reported $3.8 billion in revenue for the first quarter, down six percent year-over-year. Storage product revenue of $1.96 billion dropped 10%, partly because of $75 million worth of unfilled late orders. EMC II CEO Goulden said its XtremIO all-flash storage sold well, and the VMAX All-Flash array is one of the new products he expects to pick up steam.

Goulden said he expects VMAX All-Flash arrays to make up at least half of new VMAX sales by the end of the year. And there will be another all-flash array coming at EMC World in two weeks. That will be a midrange all-flash array that will either be part of the VNX family or replace it.

“VMAX All-Flash is a game-changer,” Goulden said. “We will have a major new mid-tier announcement at EMC World that will be the start of a new cycle where the traditional VNX plays.”

April 19, 2016  3:16 PM

Microsoft Azure adds Talon’s CloudFAST to marketplace

Dave Raffo
Cloud storage

Microsoft Azure is throwing its weight behind startup Talon Storage, making Talon’s CloudFAST file sharing and acceleration software available from the Azure Marketplace.

CloudFAST for Azure StorSimple is a joint offering that combines Talon’s software with Microsoft Azure’s StorSimple cloud storage. CloudFAST Core software runs on-premise on StorSimple appliances, which cache the most active data and send other files to Azure. CloudFAST Edge runs in remote offices, caching files and sending them to the Core system in the data center. CloudFAST features such as global namespace and file locking allow collaboration among workers in different locations without data being lost to overwrites.
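Talon doesn’t detail how its locking works under the hood; conceptually, though, global file locking serializes edits from different sites through a shared coordination point, along the lines of this hypothetical sketch. The lock-service interface and names are assumptions, not Talon’s design.

```python
# Conceptual sketch of global file locking -- not Talon's actual design.
# lock_service stands in for a hypothetical coordination service that
# every site (e.g. the Core instance in the data center) can reach.

from contextlib import contextmanager

@contextmanager
def global_file_lock(lock_service, path: str, site: str):
    """Hold an exclusive lock on a file while one site edits it."""
    lock_service.acquire(path, owner=site)   # blocks if another site holds the lock
    try:
        yield
    finally:
        lock_service.release(path, owner=site)

# Usage: edits from two branch offices serialize instead of silently
# overwriting each other.
# with global_file_lock(core, "/projects/plan.dwg", site="london"):
#     save_changes("/projects/plan.dwg")
```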

Representatives from Talon and Microsoft said they have integrated CloudFAST for StorSimple, and customers can buy CloudFAST directly from the Azure Marketplace instead of buying CloudFAST from Talon and setting up an Azure account.

CloudFAST for StorSimple costs three cents per GB per month. That would come to $9,200 per year for 25 TB of data center file storage, $36,800 for 100 TB and $148,000 for 400 TB.
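Those annual figures follow directly from the per-GB rate; a quick check (assuming binary units, 1 TB = 1,024 GB) reproduces them to within rounding.

```python
# Reproducing the quoted annual prices from the $0.03/GB/month rate.
# Assumes 1 TB = 1,024 GB; small differences from the quoted figures
# are rounding.

PRICE_PER_GB_PER_MONTH = 0.03

def annual_cost(capacity_tb: int) -> float:
    return capacity_tb * 1024 * PRICE_PER_GB_PER_MONTH * 12

for tb in (25, 100, 400):
    print(f"{tb} TB: ${annual_cost(tb):,.0f} per year")
# 25 TB:  $9,216   (quoted as $9,200)
# 100 TB: $36,864  (quoted as $36,800)
# 400 TB: $147,456 (quoted as $148,000)
```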

While other cloud NAS vendors allow customers to choose between public clouds, Talon senior vice president Charles Foley said Azure is the only cloud partner for CloudFAST. Foley said Azure is the best fit because CloudFAST is used primarily for Windows and is an enterprise product.

“We’re putting our wood behind the Microsoft arrow,” Foley said. “Microsoft is the number one enterprise vendor in the world by virtually any spending survey you look at. Azure is an enterprise cloud platform. Our target customer is not a small business. Our target customer is looking for collaboration and distance-based performance. If you’re separated by oceans and continents, you probably need us.”

Azure partners with many software vendors, but Microsoft Azure director of product marketing Badri Venkatachari said Talon’s file locking adds value for StorSimple customers.

“A lot of our customers feel the need for a data center consolidation and collaboration story,” Venkatachari said. “Talon offers a storage layer with file locking. The storage platform sits in the data center and extends to branch offices. Think of Talon as a software layer and StorSimple as the storage platform.”

Web printing company Cimpress has been using StorSimple and Talon together since late 2015, soon after Talon launched CloudFAST for Microsoft Azure File Service. Mike Benjamin, manager of enterprise applications at Cimpress, said the combination helps him manage file storage for more than 7,000 employees across 50 worldwide locations. He called CloudFAST “tailor-made for my team” because it runs on Windows, and said its file locking was a feature he had sought for years. Benjamin said he previously tried DAS, NAS, SAN, WAN acceleration and other cloud appliances but could not find the required level of performance and user experience.

“Talon allows us to distribute our files so they can be consumed and collaborated on in a global fashion,” he said. “The way our teams are collaborating, they were stepping on each other, and my [IT] team had to ease the burden. We were looking for robust file locking.”

Benjamin also cited Talon’s visual indicator that shows which files are in cache and which are in the cloud. “The user knows if it’s in the cloud it will take an extra second to be brought down,” he said.

He said while CloudFAST isn’t as fast as an on-premise file server, it’s fast enough for the files his users deal with – mostly Office files. “You’re not going to get the same performance you get with a local file server but you get the global collaboration, so there’s a trade off,” Benjamin said.

April 12, 2016  1:59 PM

Cohesity adds cloud to its converged data protection

Dave Raffo
Cohesity, Data protection

Cohesity, which bills itself as convergence for secondary data, is adding public cloud support to its data protection platform.

Cohesity’s converged data protection strategy combines data storage for backup, archiving, test/dev and other non-production workloads into one scale-out platform. Today it added the ability to use Amazon, Google and Microsoft public clouds to free up on-premise capacity.

Cohesity’s cloud features are CloudArchive, CloudTier and CloudReplicate.

CloudArchive lets customers set policies to archive data sets off the Cohesity platform for long-term retention to Google Nearline, Microsoft Azure, and Amazon S3 and Glacier cold-data services.

CloudTier moves seldom-accessed blocks into the same public clouds, but not their cold-data services. CloudTier moves data that must be accessed occasionally and isn’t yet ready for long-term archiving. It tiers the data after a given capacity threshold is met to ensure an on-premise cluster never runs out of space.
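Cohesity hasn’t published the exact threshold logic; the idea is simple capacity-based tiering, roughly as in this hypothetical sketch (the 80% threshold is an assumed example, not a Cohesity default).

```python
# Hypothetical sketch of capacity-threshold tiering as described above.
# The 80% threshold is an assumed example, not a Cohesity default.

def should_tier_to_cloud(cluster_used_bytes: int, cluster_total_bytes: int,
                         threshold: float = 0.80) -> bool:
    """Return True when the on-premise cluster crosses the capacity
    threshold and seldom-accessed blocks should move to the cloud tier."""
    return cluster_used_bytes / cluster_total_bytes >= threshold
```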

“With CloudTier, the cloud is acting like a remote disk,” Cohesity CEO Mohit Aron said. “With CloudArchive we’re moving a full image and essentially providing an alternative to tape.”

CloudReplicate copies local storage instances to public clouds or remote private clouds. Customers can spin up new instances in the cloud to recover data to on-site appliances.

Customers set the cloud target through the Cohesity Policy Manager. For instance, a customer can set all backups associated with a policy to move to CloudArchive once a week and retain snapshots for 120 days.
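Cohesity hasn’t published the Policy Manager schema, so the following is only a hypothetical rendering of that example policy; every field name is assumed for illustration.

```python
# Hypothetical representation of the example policy described above.
# Field names and values are illustrative, not Cohesity's actual schema.

archive_policy = {
    "name": "weekly-cloud-archive",
    "cloud_target": "amazon-glacier",      # assumed CloudArchive target
    "archive_frequency_days": 7,           # move backups to CloudArchive weekly
    "snapshot_retention_days": 120,        # keep snapshots for 120 days
}

def archive_due(days_since_last_archive: int, policy: dict) -> bool:
    """Return True when the next scheduled archive run should fire."""
    return days_since_last_archive >= policy["archive_frequency_days"]
```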

CloudArchive and CloudTier are available now. CloudReplicate is expected later this year. They are included in Cohesity’s base product at no cost, but customers must subscribe with the public cloud vendor they choose.

Cohesity VP of product management Patrick Rogers said the cloud integration fits with Cohesity’s strategy of converging all non-primary storage onto its platform.

“Customers say the model of having distinct backup software, backup targets and archives has to change,” he said. “We also believe they will continue to have significant on-premise infrastructure. They will use the cloud for the economic advantages and scale that it provides them, but maintain on-premise infrastructure for regulatory and competitive reasons.”

Enterprise Strategy Group senior analyst Scott Sinclair said using the cloud for selected data sets will give Cohesity customers flexibility in the way they use public clouds.

“Secondary storage can be considerable, running to hundreds of terabytes or petabytes,” Sinclair said. “If you move all of that off to Amazon and find out it’s more expensive than you thought, getting that back [on-premise] is difficult. Cohesity lets you move some of those copies to the cloud as a tier or move essentially snapshots to the cloud in an archival fashion. Organizations don’t always understand their workloads. They might say ‘No one ever accesses this, let’s move it to the cloud.’ Then they realize it’s being accessed by quite a few people in the company. Cohesity lets you move data to the cloud and if it doesn’t make sense, you can move it back.”

April 11, 2016  5:33 PM

IBM Research builds ‘cognitive’ system to cut big data storage costs

Carol Sliwa

IBM researchers are developing a cognitive storage system designed to automatically differentiate high- and low-value data and determine what information to keep, where to store it and how long to retain it.

Zurich-based IBM Research scientists Giovanni Cherubini, Jens Jelitto, and Vinodh Venkatesan introduced the concept of cognitive storage in a recently published paper in the IEEE’s Computer journal. The researchers consider cognitive storage a way to reduce costs to store big data.

The IBM Research team drew inspiration for cognitive storage from its collaborative work with the Netherlands Institute for Radio Astronomy (known as ASTRON) on a global project to build a new class of ultra-sensitive radio telescopes, called the Square Kilometre Array (SKA).

The SKA won’t be operational for at least five years. Once active, the system will generate petabytes of data on a daily basis through the collection of radio waves from the Big Bang more than 13 billion years ago, according to IBM. The system could reap significant storage savings if it could filter out useless instrument noise and other irrelevant data.

“Can we not teach computers what is important and what is not to the users of the system, so that it automatically learns to classify the data and uses this classification to optimize storage?” Venkatesan said.

Cherubini said the cognitive system draws a distinction between data value and data popularity. Data value is based on classes defined by random variables fed by users, and it can vary over time, he said. Popularity deals with frequency of data access.

“We like to keep these two aspects separate, and they are both important. They both play a role in which tier we store the data and with how much redundancy,” Cherubini said.

The cognitive storage system consists of computing/analytics units responsible for real-time filtering and classification operations and a multi-tier storage unit that handles tasks such as data protection levels and redundancy.

Venkatesan said the analytics engine adds metadata and identifies features necessary to classify a piece of information as important or unimportant. He said the system would learn user preferences and patterns and have the sophistication to detect context. In addition to real-time processing, the system also has off-line processing units to monitor and reassess the relevance of the data over time and perform deeper analysis.

The information goes from the learning system into a “selector” to determine the storage device and redundancy level based on factors such as the relevance class, frequency of data access and historical treatment of other data of the same class, according to Venkatesan. The cognitive system would have different types of storage, such as flash and tape, to keep the data.
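The paper describes the selector’s inputs rather than its code; a minimal, purely illustrative sketch of that mapping might look like the following, where the class names, tiers and thresholds are all assumptions rather than values from the IBM Research work.

```python
# Illustrative sketch of the "selector" idea described above. The
# relevance classes, tiers and thresholds are assumptions for clarity,
# not values from the IBM Research paper.

def select_placement(relevance_class: str, accesses_per_month: float):
    """Map a relevance class and access frequency to a storage tier
    and redundancy level."""
    if relevance_class == "high":
        tier = "flash" if accesses_per_month > 10 else "disk"
        redundancy = "3-way replication"
    elif relevance_class == "medium":
        tier = "disk"
        redundancy = "erasure coding"
    else:                                   # low-value data
        tier = "tape"
        redundancy = "single copy"
    return tier, redundancy

print(select_placement("high", 50))    # ('flash', '3-way replication')
print(select_placement("low", 0.1))    # ('tape', 'single copy')
```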

IBM researchers tested the cognitive storage system on 1.77 million files spanning seven users. They split the server data by user and let each one define different classes of files considered important. They categorized the data into three classes based on metadata such as user ID, group ID, file size, file permissions, file creation time/date, file extension and directories.

Cherubini said the IBM Research team developed software for the initial testing using the information bottleneck algorithm. He said they’re currently building the predictive caching element, “the first building block” for the cognitive system, which he said should be ready for beta testing by year’s end.

“Beyond that, it’s harder to make predictions,” Cherubini said. “If everything goes well, I think we should be able to have the full system developed at least for the first beta tests within two years.”

IBM researchers said early testing has fared well for data value prediction accuracy with the contained data set. But additional research is necessary to address challenges such as identifying standard principles to define data value and assessing the value of encrypted data.

Although the cognitive storage system is designed to classify and manage enormous amounts of data, the researchers said the benefits could extend to IT organizations. Venkatesan said the potential exists for a service-based offering.

“We think that this has a lot of potential for application in enterprises because that’s where the value of data becomes of highest importance,” Cherubini said.

The IBM Research team is looking for additional organizations to share data and ideas and collaborate on the cognitive storage system.

April 8, 2016  3:08 PM

Drobo launches mobile file sharing for unstructured data

Sonia Lelii
Drobo, file sharing

Drobo this week unveiled DroboAccess to enable mobile file sharing on its NAS boxes.

DroboAccess lets customers access and share files stored on Drobo NAS from any device or location. The capability is available on the Drobo 5N for small businesses and the Drobo B810n for larger configurations.

The new software capability, which is part of the myDrobo suite of applications, allows customers to access and share files on their Drobo with end-to-end security. The mobile file sharing capability also allows users to share files or folders that can be designated read-only or read/write with password options.

“Three out of 10 of our customers are asking for this,” said Rod Harrison, CTO at Drobo.

Harrison said Drobo is using Pagekite as a partner to provide a secure tunnel for the data. Data is encrypted on the Drobo device before it is transferred. DroboAccess is an extension of the company’s myDrobo service platform that encrypts data end-to-end.

“This is something that can be complex getting it all to work for yourself. Your cable wire will have a firewall and you have to figure out the right ports and you have to worry about security,” Harrison said.

DroboAccess currently is available for free on 5N and B810n on the Drobo dashboard. The iOS and Android applications are available for 99 cents on the App Store and Google Play.

When Drobo and Connected Data merged in 2013, there were plans to combine Connected Data’s file-sharing Transporter technology with Drobo hardware, but Drobo was spun out to a separate group in 2015.

April 8, 2016  9:35 AM

Veeam sets out to orchestrate DR

Sonia Lelii

Veeam Software moved further beyond pure virtual machine backup this week by unveiling Veeam Availability Orchestrator, a multi-node hypervisor orchestration engine for disaster recovery.

The Veeam Availability Orchestrator, which will be available in the second half of this year, is an add-on to the Veeam Availability Suite and Veeam Backup & Replication for VMware and Microsoft Hyper-V hypervisors.

The orchestrator software helps manage Veeam backups and replication via a disaster recovery plan that can be tested and automatically documented.

Doug Hazelman, Veeam’s vice president of product strategy, said the orchestration tool helps customers manage cross-replication across locations in the enterprise. Customers can set up policies, test against those policies for disaster recovery and get automated documentation for compliance requirements.

The software will be licensed separately from the Veeam Availability Suite. Veeam has not set pricing yet.

“It will be priced per VM rather than our standard per socket,” Hazelman said. “You do have to have the Veeam Availability Suite installed and replication set up. The orchestrator has a separate interface to define the policies. Veeam Availability will hold rules in the event that a failover happens.”

Hazelman said the orchestrator is for enterprises looking to automate DR processes.

In February, Veeam announced it will add a fully functional physical server backup product this year. The company has focused on virtual machine backup and had resisted supporting physical server backup, but customer requests as the vendor moves into the enterprise with its Availability Suite prompted the change. With the Availability Suite, the company has emphasized what CEO Ratmir Timashov calls protection for the “modern data center” rather than only protecting virtual machines.
