Barracuda Networks has jumped into the crowded online file sharing pond, and today enlisted Drobo as a partner to help get started.
Barracuda’s Copy cloud file sharing service launched as a private beta last year, and will enter public beta this year. The security/data protection vendor will offer customers of Drobo’s new 5N SMB/prosumer NAS box 5GB of free cloud file storage on Copy. Drobo customers can license additional capacity from Barracuda.
Besides seeding its cloud with potential customers, Barracuda general manager Guy Suter said the partnership can make for a smoother interaction between on-premise and cloud storage.
“To us, the Drobo looks like another device that we sync files to,” he said. “Having local storage and cloud storage interact with each other seamlessly helps your workflow a lot.”
For Drobo, the deal gives its customers a quick way to use the cloud as a complement to the storage inside the box. Erik Pounds, Drobo VP of product management, said he expects customers to embrace the cloud even after they buy on-premise storage. The cloud can serve as backup of critical files.
“A lot of data stored in remote or home offices is inhibited by the four walls of that home or office,” he said. “We’re not afraid of the cloud because the amount of data that needs to be stored and shared is massive. The average data on Drobo storage is 3 TB, so there’s a lot of desire to use both.”
Copy is also available as a standalone service, but Barracuda can use the help from Drobo in making its way among dozens of competitors already in the market, including Dropbox, Box and EMC-owned Syncplicity.
Suter points out the cloud file sharing market is young, and current contenders are still grappling with the best way to serve both users and companies. He said the goals for Copy are to facilitate “easier sharing, and to make it more secure than what’s out there, and company friendly.”
Under the company-friendly category, he said Copy gives administrators the ability to create separate areas to keep proprietary company data. “Users can have an area for personal data, but there’s another area for company data,” he said. “Companies can revoke access to company data.”
Barracuda is known mostly for its firewall products, but it does offer a hybrid backup service based on technology gained when it bought backup software vendor Yosemite Technologies in 2009. Some of that data protection technology is used for Copy.
And you can expect Barracuda to go deeper into data protection. BJ Jenkins joined Barracuda as CEO last October after running EMC’s backup and recovery division.
It’s a sign of the times that news of NetApp’s FlashRay all-flash storage system this week overshadowed its FAS6200 high-end disk array upgrade.
The FAS6200 is the highest performing and largest capacity platform of NetApp’s mainstream storage family. FlashRay won’t be available for another year, and probably won’t approach FAS6200 sales for years.
But flash storage is so much more interesting these days and, besides, it’s not every day that NetApp reveals it is developing a non-Data OnTap storage system.
The FAS6200 hardware isn’t that much different from the previous versions. The exceptions are that the new systems have substantially more memory and support 4 TB drives. The memory boost results in better performance and the larger drives bring the maximum cluster capacity to 65 PB. The FAS6220, 6250, and 6290 replace the FAS6210, 6240 and 6280 arrays and V-Series gateways.
The dual-controller 6220 holds 1,200 drives and 4.7 PB in a 6U chassis with 96 GB of memory. The 6250 and 6290 have two 6U chassis, and each system holds 1,440 drives and 5.6 PB. The 6250 has 144 GB of memory and the 6290 has 192 GB of memory.
Flash can play a big part in these systems, too. The 6290 holds up to 16 TB of total Flash Cache and Flash Pool capacity, the 6250 holds 12 TB of flash and the 6220 holds 4 TB. Flash Cache is controller based and optimizes performance of data throughout the array. Flash Pools accelerate performance of data inside a volume.
The FAS6200 series competes mostly with EMC’s VMAX 10K entry level enterprise system and the higher end of the midrange VNX family, IBM’s XIV and V7000, the larger of Hewlett-Packard’s StoreServ arrays, and Hitachi Data System’s Virtual Storage Platform (VSP) and Unified Storage-VM systems.
Three months after closing its StorSimple acquisition, Microsoft is still keeping its roadmap plans under wraps. The only sign of StorSimple integration so far is what Microsoft calls ASAP – the Azure storage acceleration program.
ASAP is a quick and easy way to purchase cloud storage using StorSimple’s controllers and the Microsoft Windows Azure cloud service. Customers can buy a StorSimple iSCSI storage controller with 50 TB or 100 TB of capacity provisioned to move data to the Azure cloud for a hybrid setup using on-premise and cloud storage. That means the purchase and provisioning are handled in one step instead of a customer having to engage StorSimple and a cloud provider separately.
Mark Weiner, a StorSimple executive and now a director of product marketing for Microsoft storage, said purchasing through ASAP lowers the cost of storage capacity by at least 60% versus traditional storage infrastructure.
Weiner said the biggest change since the acquisition is that StorSimple’s product has gone global under Microsoft. Before the sale, it was U.S.-focused. When asked if StorSimple still worked with other cloud providers, he said, “technically, there’s no reason why we can’t. But obviously we are focused on a joint solution with Azure, either purchased on ASAP or purchased separately.”
Weiner assures us that StorSimple is expanding and improving its technology under Microsoft, and Microsoft sees cloud storage as a big growth area.
“You will see a lot of ongoing innovation from StorSimple as part of Microsoft,” he said. “I still see my engineering colleagues late in the office, there’s no slowing down.”
Brocade’s new CEO Lloyd Carney said one of the reasons he took the job is because he sees an exciting future for Fibre Channel storage networks.
On Brocade’s earnings call Thursday, Carney spoke on the record extensively for the first time since replacing Mike Klayko as CEO last month. He said technologies such as virtualization, cloud and flash create more demand for FC, especially the newer 16 Gbps switches that Brocade has been selling since mid-2011. Brocade has had the 16-gig switch market to itself because its main rival Cisco has yet to upgrade from 8 Gbps. That is expected to change over the next few months, but Carney and other Brocade executives said Cisco’s entry to 16-gig should pump even more life into FC storage.
Although Brocade has spent a lot of resources developing the Ethernet switching business since acquiring Foundry Networks in 2008, it has continued to lead FC switching market share. Brocade has taken the approach that FC will continue as the dominant storage protocol for the foreseeable future while Cisco maintained the future of storage lies in Ethernet and converged Fibre Channel over Ethernet (FCoE) networks.
“One of the reasons I joined Brocade was it was clear to me from the outside looking in that Fibre Channel wasn’t dead despite the cloud that our friends at Cisco had put on it,” Carney said. “FCoE didn’t take over the world, and Cisco has drawn back from that now that FCoE has become a bit player in the overall scheme of things. Every trend that’s out there in storage points back to Fibre Channel.
“Fibre’s not dead anymore … I’m confident in the growth of Fibre Channel.”
Carney’s last job was CEO of I/O virtualization startup Xsigo Systems, which Oracle acquired last year. Carney has a lot more experience in networking than storage but said the “SAN market continues to represent an exciting opportunity for Brocade.”
Jason Nolet, VP of Brocade’s Data Center Networking Group, said he welcomes Cisco’s move into 16-gig FC.
“We expect that product to be in the market in the first half of the year and, candidly, we’re excited to see that happen,” Nolet said. “We’ve been telling you guys quarter after quarter that the Fibre Channel market is alive and well and growing, and customers want to continue to invest. We’ve been a lone voice there until now. The fact that Cisco was pushing an Ethernet-only agenda and an FCoE agenda almost exclusively the last several years, and they’re now coming forth with a dedicated Fibre Channel product, is the best testament of all to the strength that remains in this market.”
Brocade reported $362 million in SAN product revenue last quarter — its highest ever — compared to $140.5 million from Ethernet networking and $86.5 million from services. And 42% of its storage revenue came from 16-gig directors and switches.
Brocade projects the demand for storage capacity will increase 37% per year over the next five years. “And as long as storage demands increase, the demand for Fibre Channel will also increase,” Carney said.
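Brocade’s 37% annual growth figure compounds quickly. A minimal sketch, taking the growth rate from the article but using an arbitrary starting capacity of 1.0, shows what that projection implies over the five-year horizon:

```python
# Compound Brocade's projected 37% annual growth in storage-capacity
# demand over five years. The 37% rate comes from the article; the
# starting capacity of 1.0 is an arbitrary unit for illustration.

growth_rate = 0.37
years = 5
multiplier = (1 + growth_rate) ** years
print(f"Capacity demand after {years} years: {multiplier:.1f}x today's level")
```

In other words, if the projection holds, capacity demand would nearly quintuple over the period.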
The use of solid state-based storage systems is rapidly increasing. So far, solid state technology has been deployed to accelerate applications in specific environments. Successes have been demonstrated for increasing the number of transactions a system can perform, the number of virtual machines per physical server (commonly referred to as virtual machine density), and the number of virtual desktops supported by a storage system.
The continuing advance of all-solid state storage systems is leading to strategies where primary storage — defined as the most active storage of information for applications — will be all solid state. Planning for all-solid state primary storage requires the fabric infrastructure to be considered as a critical element in delivering the maximum value from solid state technology.
Solid state storage systems have much lower latency than systems designed for spinning disks. Latency is measured in microseconds, and the systems can sustain a much greater number of operations per second. Solid state technology is really a memory technology, and using low-level disk-based access protocols may not be optimum. Faster or more streamlined protocols may reduce overhead and further reduce latency.
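To see why microsecond latency matters, a rough back-of-envelope sketch helps; the latency figures below are illustrative assumptions, not measurements of any particular system:

```python
# Back-of-envelope: how access latency bounds the number of operations
# per second a single serial request stream can achieve. Latency values
# are assumed for illustration, not measured from any specific device.

def max_ops_per_second(latency_seconds: float) -> float:
    """Upper bound on serial operations per second at a given latency."""
    return 1.0 / latency_seconds

disk_latency = 5e-3     # ~5 ms seek + rotation for a spinning disk (assumed)
flash_latency = 100e-6  # ~100 microseconds for a solid state read (assumed)

print(f"Disk:  ~{max_ops_per_second(disk_latency):,.0f} ops/sec")
print(f"Flash: ~{max_ops_per_second(flash_latency):,.0f} ops/sec")
```

Under these assumptions the solid state device sustains roughly 50 times the serial operation rate, which is why protocol and fabric overhead becomes the next bottleneck.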
So, what fabric interconnect is best? Most arguments about deploying new fabrics or making an infrastructure change have been based on cost. But if a fabric technology requires additional hardware components to reduce latency, the fabric with the lowest inherent overhead or latency may actually be the most economical. Economic valuation needs to be based on the increase in the efficiency of the system. Solid state systems can store and retrieve information faster, allowing applications to generate more transactions per second and deliver more value from the investment in servers and other hardware. The fabric choice to support solid state systems carries much bigger economic potential than the cost of the fabric or its administration alone.
Low latency interfaces used today such as Fibre Channel or InfiniBand might deliver the greatest value when economic measures are used. Or, maybe another fabric or variation could evolve and create a disruption in the storage infrastructure. The economic value could make a compelling reason to make a transition.
Infrastructure for storage has always been a slow transition and that trend is expected to continue. But the efficiency and economic value delivered from solid state storage may accelerate a change of some type. The fabric decision will be based on how to enable applications to do more work and provide faster access to information. Initially, primary storage will be the focus for a fabric that can maximize value. Secondary storage may not be as demanding, but a common fabric may be preferred over a specialized fabric. Vendors will promote what they have now as a practical matter but will also look for the competitive advantage in delivering the economic value with future offerings. It just may take some time to evolve.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Alex Bouzari, CEO of HPC storage vendor DataDirect Networks (DDN), said that will change in 2013. He expects big data to come into the mainstream and drag HPC with it. He expects DDN to come along for the ride after more than a decade of handling big data needs before anybody called it big data.
“High performance computing has come of age,” Bouzari said, “and it’s now called big data.
“Big data is really the democratization of high performance computing. What was limited to a small number of extreme requirements has become commonplace. Now we’re seeing big data and high growth data across markets – the web, cloud and commercial high performance segments.”
It could be wishful thinking on his part, but Bouzari said big data requirements have already spilled into the enterprise, especially financial services, healthcare and manufacturing. He expects that to result in more cloud storage implementations as companies struggle to store and analyze data for their businesses.
That could be a boon to object-storage systems built for cloud scale, such as DDN’s Web Object Scaler (WOS) system. Bouzari also sees a need to make Hadoop work better with storage built for big data.
“Customers say ‘we love what Hadoop can do for us, but we need it as a product that can solve a business problem,’” Bouzari said. “Customers are looking to maximize the performance of Hadoop. We can greatly accelerate Hadoop.”
Bouzari also sees flash playing a role in big data, although until now the price of solid state storage has made it cost-effective only for smaller data sets that need high performance. Bouzari said he is starting to see that change among DDN’s customer base.
“All-flash storage for high performance computing is one of those things that seemed to make a lot of sense, but proved cost prohibitive in many environments,” he said. “We did some all-solid state deployments [in 2012] but they were typically deployed as part of much larger IT infrastructures. As the cost of non-volatile memory continues to decrease, the ability to use it to serve huge amounts of content will make solid-state more attractive.”
Scality and Silicon Graphics Inc. (SGI) have signed an OEM deal that combines object storage software with high-performance commodity hardware. It provides a scale-out computing offering for organizations looking to build out systems that can store large amounts of unstructured data.
The deal includes the Scality Ring Organic software that resides on distributed, independent nodes and the SGI Modular InfiniteStorage Server (SGI MIS Server). The server delivers high density by packaging two motherboards together with up to 72 3.5-inch SAS or SATA drives in a single 4U chassis. Twenty nodes in a 19-inch rack can hold 2.8 PB.
“I think this move is symbolic because partners need each other to pull together solutions,” said Ashish Nadkarni, IDC’s research director for storage systems. “SGI did not have native object storage. They (probably) had customers that said they needed object storage but SGI didn’t have it. They would be losing out on this part of the market, so they looked to partner with a software vendor.”
Floyd Christofferson, SGI’s director of storage product marketing, said object storage is software-focused “but really software has to run on hardware.”
Nadkarni said SGI has expertise in high performance computing in large environments with massive storage systems. This gives Scality an opportunity to sell into SGI’s existing customer base.
Scality Ring presents a unified data interface delivering local and remote object and file access. The configuration can offer geo-redundancy and auto-tiering.
“This is a ring architecture that is designed to be scalable. You can keep adding hardware and capacity without compromising performance,” Nadkarni said.
There are other examples of the object storage ecosystem expanding. Cleversafe has formally announced a Solution Network partnership to boost sales of its object-based storage products. The partners include Symantec, CommVault, Cloudera, Riverbed, Cisco, VMware, Intel and Panzura.
“We are focused on a self-certification program that allows vendors to run tests [with Cleversafe software] to ensure the technology works with object storage,” said Russ Kennedy, Cleversafe’s vice president of product marketing.
Information Technology (IT) organizations must continue to innovate and advance their capabilities to deliver services more economically. Without that progress, competitive pressures will overtake IT, with the following results:
• The potential to be outsourced will become greater with the promises (irrespective of the reality) of being less costly with more rapid reaction to changing demands.
• Internal groups demanding immediate resources will start looking for alternatives, leading to rogue IT operations or rogue use of cloud services. Undoing these decisions can be difficult, both financially and in the acrimony it creates.
• Falling behind in IT can hold the company back from maintaining or gaining in a competitive environment.
Several storage technologies can be major contributors to innovation that advances IT. Technologies such as solid state storage for accelerating access to information can make a major difference in the amount of processing an application can do. Embedded storage system capabilities for data protection using snapshots and remote replication continue to improve with new developments from vendors. Capitalizing on these improvements strengthens the operational environment and helps move IT capabilities forward.
But storing and retrieving data with the performance necessary to serve applications is just the first part to consider. How efficiently the information can be managed is the real test of technology advancement.
Information is data with context, where the context indicates the value, usage, and form necessary for understanding and using the information. Storage systems that can manage that value, automate movement to storage with different economics, and handle data protection provide major advancements in efficiency and ultimately result in greater competitiveness.
IT must make investments in storage technology that advances the capabilities of handling information. Each investment has a return on investment (ROI) payback that can be calculated, and an improvement in total cost of ownership (TCO) over the longer term. More importantly, the investments advance IT capabilities to be more competitive in the face of ongoing pressures.
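The ROI payback mentioned above can be illustrated with a simple payback-period calculation; the dollar figures below are hypothetical, chosen only to show the arithmetic:

```python
# A minimal payback-period sketch for a storage investment: months until
# cumulative monthly savings cover the upfront cost. All figures are
# hypothetical examples, not taken from the article.

def payback_months(upfront_cost: float, monthly_savings: float) -> float:
    """Months of savings needed to recover the initial investment."""
    return upfront_cost / monthly_savings

# e.g. a $120,000 solid state upgrade saving $8,000/month in operations
print(f"Payback: {payback_months(120_000, 8_000):.0f} months")
```

A real TCO analysis would also fold in depreciation, maintenance, and administration costs, but the payback period is the figure most budget discussions start from.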
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Although CommVault had a strong quarter to finish 2012, CEO Bob Hammer spent nearly as much time on the backup vendor’s earnings call this week talking about what is coming rather than what happened.
Without going into too much technical detail, Hammer previewed Simpana 10 ahead of its Feb. 25 rollout. He said the next version of the Simpana backup and management software will include current core technologies such as deduplication and snapshots while adding features to help attract customers in new markets.
Hammer said Simpana 10 will include new features for protecting data on mobile devices and analytics – areas that have not been well-served by traditional backup products — and increased capabilities for storing and managing data in private and public clouds.
Some of the features will include next generation archiving, improved integration with shared services, and more comprehensive reporting and monitoring.
“We’re doing all of this in a way nobody else has done it before,” Hammer said in an interview after the earnings call. “We have massive scale and automation and multi-tenancy, and only make one copy for backup and archive.”
Hammer said the changes are needed to reflect the way customers now protect and manage data, which is created much faster and has more regulatory requirements tied to it than ever before.
“Data management technology has to change,” he said. “With mobile data, customers are having problems protecting data because the technologies they’re using today aren’t adequate to back up in an adequate timeframe. They don’t have time to back it up, and when they do, they have difficulty finding it.”
Part of the scalability for Simpana 10 will be inclusion of an object-based repository, Hammer said. He said Simpana will handle metadata separately, the way object-based storage systems do. “And it’s open, so a third party can write to it,” Hammer said. “You can crawl the network and get data in, in a different way than we’ve done in the past. You can search the metadata before you search objects, then do a deeper dive on the content itself.”
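The metadata-first search Hammer describes can be sketched roughly as follows; the object schema, field names, and `search` helper here are hypothetical illustrations, not CommVault’s actual API:

```python
# Sketch of metadata-first search over an object repository: filter on a
# separate metadata index first, then do the deeper (more expensive)
# content inspection only on the surviving candidates. The schema and
# fields are invented for illustration.

objects = {
    "obj-1": {"meta": {"type": "email", "owner": "alice"}, "content": "Q3 budget draft"},
    "obj-2": {"meta": {"type": "doc",   "owner": "bob"},   "content": "budget approved"},
    "obj-3": {"meta": {"type": "email", "owner": "bob"},   "content": "lunch plans"},
}

def search(meta_filter: dict, content_term: str) -> list[str]:
    # Pass 1: cheap metadata filter narrows the candidate set.
    candidates = [oid for oid, obj in objects.items()
                  if all(obj["meta"].get(k) == v for k, v in meta_filter.items())]
    # Pass 2: deeper dive into content, but only for the survivors.
    return [oid for oid in candidates if content_term in objects[oid]["content"]]

print(search({"type": "email"}, "budget"))  # → ['obj-1']
```

The design point is that the metadata index stays small and fast, so most queries never touch the bulk object content at all.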
CommVault is doing fine with Simpana 9. Its $128 million revenue last quarter was up 24% over the previous year and 8% over the prior quarter. Hammer said he expects to hit $1 billion in annual revenue in a few years, with Simpana 10 the catalyst.
Fusion-io has greatly benefited from Facebook and Apple for more than two years, with those companies buying tens of millions of dollars’ worth of Fusion-io PCIe flash cards almost every quarter. Now Fusion-io is seeing the downside of having two customers make up most of its business.
Fusion-io dropped its revenue forecast for this quarter to $80 million from a previous estimate of around $137 million because Apple and Facebook won’t make any meaningful purchases this quarter or next. Fusion-io expects to lose from $10 million to $15 million this quarter after making $13.9 million last quarter. Its stock price fell 22% Wednesday after it revealed the forecast drop, and fell another 13% today.
Last quarter, Facebook bought $41 million of Fusion-io products and Apple bought $19.3 million, making up a little more than half of Fusion-io’s $120.6 million in total revenue. In the previous quarter, Facebook and Apple bought $33 million worth of Fusion-io products apiece.
Now the spigot has been turned off, at least temporarily. Fusion-io CEO David Flynn spent a lot of his earnings call Wednesday assuring analysts that Apple and Facebook will return to their status of prolific flash buyers after the six-month lull.
“This is really about the timing of when they put in new infrastructure, not whether or not Fusion-io is a key part of that infrastructure,” Flynn said.
He said Fusion-io has proven its value to those companies, adding “when they pull the trigger for deploying is not in our control.”
Fusion-io could have other problems in the coming months. EMC is pledging more PCIe-based flash products following its VFCache caching software launched in 2012. And Seagate this week invested $40 million and signed a reseller deal with Fusion-io rival Virident Systems. Seagate will sell Virident PCIe flash products through its OEM partners and channel partners.
Flynn shrugged off the Seagate-Virident news, saying “the business is not driven at the OEMs. The business is driven at the end-user customer. So having companies that build components, really they won’t be at the point where the competition is actually happening, and that’s with the end users.”
On a more optimistic note, Flynn said the ioScale cards Fusion-io launched this month are off to a good start. The high-density, low-cost ioScale cards scale to 3.2 TB and are aimed more at cloud providers and telcos than at enterprises’ mission-critical apps. Flynn said ioScale products were involved in five of the vendor’s 10 $1 million-plus orders last quarter.