Storage Soup


August 11, 2014  2:25 PM

Axcient's new virtual appliance faster, Hyper-V support on the horizon

Sonia Lelii
Storage

Axcient introduced the second generation of its backup and disaster recovery virtual appliance, which is now available in smaller storage capacities with reduced backup times. The new appliance allows companies to replicate data, applications and virtual machines into the cloud for granular system protection and disaster recovery in VMware environments.

Daniel Kuperman, Axcient's senior product marketing manager, said the company has had 350 deployments of the Axcient virtual appliance among 80 customers. The virtual appliance launched last March works similarly to the hardware appliance Axcient has sold since 2009, and handled 20 TB of backup data. It provided local and cloud replication, local server failover, and granular local recovery of files, folders, applications and images.

The new version comes in several form factors, with capacities of 1 TB, 2 TB, 4 TB, 6 TB, 10 TB, 14 TB and 20 TB, and Axcient said it delivers a 50 percent reduction in backup times. It supports hardware running AMD processors, while the previous version only worked with Intel servers.

“Customers wanted it to be faster. They wanted it to be a lighter storage footprint,” Kuperman said. “They also wanted it to run on hardware, specifically they want to repurpose an existing hardware appliance they had.”

Kuperman said the improved backup speed comes from tweaks Axcient made to its change-block capability. Pricing for the virtual appliance starts at $14 a month for the 1 TB version.
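
For context, change-block techniques generally track or hash fixed-size blocks and copy only the blocks that differ from the previous pass, which is where this kind of speedup typically comes from. The sketch below illustrates that general idea only; it is not Axcient's implementation, and the block size, hash choice and function names are assumptions.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB block size is an assumption, not Axcient's


def changed_blocks(path, prev_hashes):
    """Return the blocks of a file whose content changed since the last run.

    prev_hashes maps block index -> hex digest from the previous backup pass.
    """
    changed, new_hashes = [], {}
    with open(path, "rb") as f:
        for index, data in enumerate(iter(lambda: f.read(BLOCK_SIZE), b"")):
            digest = hashlib.sha256(data).hexdigest()
            new_hashes[index] = digest
            if prev_hashes.get(index) != digest:
                changed.append((index, data))  # only these go to the target
    return changed, new_hashes
```

On a first pass prev_hashes is empty, so every block counts as changed; on every subsequent pass only the deltas are copied, shrinking both backup time and transfer size.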

Kuperman said Axcient plans to support Microsoft Hyper-V, although there is no timeframe for that yet.

“It’s significant that we are speeding up the process for Hyper-V,” he said. “We are getting requests from multiple fronts, from MSPs who say their customers are coming to them (with the request).”

August 11, 2014  11:25 AM

Quantum queues up cloud file sharing technology from Symform

Dave Raffo
Private Cloud, Quantum, Storage, StorNext

Quantum added cloud services and file sync and share technology through its acquisition of Symform last week. The interesting part will be to see how it implements these technologies with its current archiving and backup products.

Symform claims 45,000 customers use its services, mostly consumers and prosumers who pool their own hardware to form a cloud. Symform said its cloud includes petabytes of storage and billions of data objects under management. Quantum will continue that service, but is more interested in the platform and technology than the pure sync and share business.

Janae Lee, Quantum’s senior vice president of strategy, said Quantum will expand the technology to fit its enterprise business model.

“Clearly, that’s not the way we would run that type of service in the markets we participate in,” she said of Symform’s consumer and prosumer model. “This is a platform and technology acquisition. We’ll adhere to what they’ve done but monetize it in a more traditional business-to-business way.”

Quantum did not disclose the purchase price, but said it is hiring Symform's development team, including the startup's founder and CTO, Bassam Tabbara.

The Quantum news release about the deal said the Symform technology will augment Quantum's Q-Cloud data protection service and the nearline private cloud offerings that are part of its StorNext and Lattus platforms.

Lee said the deal does not mean Quantum is looking to compete with public clouds such as Amazon or with file sync and share vendors such as Dropbox and Box.

“We don’t want to compete with Amazon. That’s not a winning model,” she said.

As for file sharing, she said, “that’s becoming a pretty populated market. Everybody’s having one now. As a standalone business, that would be a difficult market to penetrate. It needs to be part of a larger solution. We can package these technologies together [with current Quantum products].”

She added that Quantum customers have expressed interest in hybrid and private clouds as well as application services.

“We can take this technology that’s been proven in a public cloud and apply it as a private cloud model,” Lee said. “We can go to a large enterprise customer and say, ‘Hey, this has worked with 45,000 end points.’”


August 10, 2014  7:55 PM

NVDIMM makers try to find their niche in flash ecosystem

Rich Castagna
DRAM, MCS, NVDIMM, Solid state disks

NVDIMM technology, one of the latest players searching for a position in a flash field that's growing more crowded, shares enough traits with DRAM, DIMM-based flash storage and other solid state products to make it hard to pin down. The number of vendors touting NVDIMM products at the recent Flash Memory Summit attests to the growing interest in so-called bridge technologies that may be able to narrow the gap between traditional server memory and storage subsystems.

DIMM-based flash storage—often referred to as memory channel solid-state storage (MCS)—is, itself, a new alternative on the flash storage menu. It uses server DIMM slots that are typically reserved for DDR3 (or DDR4) DRAM to provide fast, low-latency solid-state storage for the host server. But MCS is intended as a flash caching alternative or as persistent storage, and doesn't extend or otherwise enhance a server's memory.

NVDIMM, on the other hand, is closer to DRAM—in fact, it's basically the same memory technology equipped with supercapacitors that make it nonvolatile and, therefore, more stable and predictable than DRAM, which needs a steady power source to maintain its contents.

While many of the vendors were showing products that were either in limited production or pre-production, the consensus was that the anticipated price for NVDIMM products was likely to be anywhere from three to five times the cost of DRAM. That price alone clearly takes NVDIMM out of the realm of flash storage and, as long as it remains that high, places it firmly in the “niche product” category. Also, given the cost of the technology behind NVDIMM, capacities of the available products tend to be more along the lines of standard DRAM rather than the much cheaper flash-based storage.

But according to Tinh Ngo, director of business development—data communications at Viking Technology, as key server component makers such as Intel begin to tailor their products to use NVDIMM efficiently, a broader market should develop. He noted that SuperMicro is currently selling servers that support NVDIMM.

Enmotus, a startup that specializes in automated tiering software, can tier virtually any type of storage installed in a server, including hard disks, all forms of flash (PCIe, M.2, SATA, SAS, etc.) and NVDIMM, according to Adam Zagorski of Enmotus’ marketing team. This gets storage tiering about as close to memory as it can get these days, narrowing the gap by treating a memory technology as persistent storage.

WinDawn Technology, based in Wuxi, Jiangsu, China, demonstrated its NVDIMM product, which it claims is the first of its kind developed and built in China. Henry Huang, chief technologist for WinDawn, explained that the company is positioning the product, which comes in 1 GB, 2 GB and 4 GB configurations, as a backup for DRAM. With the NVDIMM backing up main memory, if the server should lose power, the session and all of the data that was in memory could be recalled immediately and processing could resume. Although it may seem like a rather exotic implementation, it would certainly fit in well for some financial data processing, such as trading systems.

Many of the technical experts at the Flash Memory Summit felt that NVDIMM was promising, but more as a concept perhaps than a product. The goal is to erase the line dividing memory and storage for a continuum of unfettered caching or tiering, but many in the business expect that the goal is more likely to be realized when faster technologies emerge as NAND flash replacements, if those new technologies can provide performance approaching DRAM speeds at a reasonable cost.


August 10, 2014  6:50 PM

Survey says budgets dipping but interest in flash storage rising

Rich Castagna
AS400 physical file, Dell, EMC, Flash, flash storage, Fusion-io, Hitachi, HP, IBM, IOPS, Latency, NetApp, Pure Storage, Quality of Service, Solid-state storage, Storage, Violin Memory

IT budgets are declining on average, and while planned storage spending is dipping, too, it accounts for 13.5% of the overall IT budget. That figure, based on survey data collected by 451 Research’s TheInfoPro service for the first half of 2014, actually shows storage’s share of the budget grew from 9.5% during the same period last year.

And while the overall spending average shows a decline, more companies are planning to increase storage spending to some degree than are cutting back, according to a presentation delivered by TheInfoPro research analyst Nikolay Yamakawa during the recent Flash Memory Summit. (For more information on the survey, please read this 451 Research/TheInfoPro blog post.)

Survey taps mid-sized, enterprise companies

TheInfoPro surveyed 265 Global 2000 companies with revenues of at least $1 billion; survey respondents were split roughly down the middle between IT executives and managers and architecture and engineering specialists. When asked to rank their top storage projects for 2014, 8% of the respondents cited flash implementation—the first time solid-state-related activities appeared in the top five of the project list.

[Graph: TheInfoPro top storage projects for 2014. Source: 451 Research, LLC, www.451research.com]

Some of that flash storage is likely to be deployed to ease one of the key storage-related pain points noted by 21% of the survey respondents: “delivering storage performance.” Performance was the second biggest pain point, trailing only “rapid capacity growth.”

Databases loom as leading flash apps

For current flash users and those planning implementations, database applications loom as the likeliest candidates to get a boost from solid-state storage, as noted by 38% of respondents. Next in line for a flash jump start are virtual desktop infrastructure projects (19%) and analytics apps (16%).

Among the most desired features for flash storage implementations, Quality of Service (QoS) controls ranked highest with 74% saying it was very or extremely important. Tools to manage flash data’s lifecycle (72%) were next, followed by cache coherency management (56%).

But make no mistake, when it comes to flash storage the name of the game is speed. When asked if they had specific IOPS requirements, 73% said yes—a big jump from the 52% who said they were looking for a performance boost last year. Delving deeper into the need for speed, 47% said they were looking to deliver more IOPS to their apps and 21% need to address latency issues.

Still limited use of caching apps

Some type of caching software is being used in only 40% of server-side solid-state deployments; the rest of the flash devices are being used for persistent storage. Still, a lot of flash is being used as cache or as part of active automated tiering schemes, as 48% of current flash users say they continuously move data on and off solid-state storage. Fifty-nine percent of respondents who said they were using auto-tiering rated it a “success,” with 32% indicating that they're experiencing some stress in their tiering setups.

Where flash works best

The key question for many companies is not whether or not flash should be a part of their storage environments, but rather where to put that flash. TheInfoPro survey indicated that 67% of current users have solid-state installed in their SAN or NAS arrays (hybrid flash array), 25% have it slotted in servers and 8% are using all-flash arrays (AFAs). For future implementations, 12% are considering hybrids, 13% are looking server-side and a whopping 22% are aiming at AFAs.

For AFAs, EMC, Violin Memory and Pure Storage are the leaders among those already using these arrays, with EMC and Pure Storage appearing most often as choices for prospective implementations. EMC leads again in the already-implemented hybrid category, with NetApp, Hitachi, IBM, HP and Dell following. Not surprisingly, server-side flash pioneer Fusion-io still dominates that market segment.

Looking at some of the newcomers in the flash storage sphere, Nimble Storage, Pure Storage and Nimbus Data have gained the greatest awareness among users to date.

For more survey-based data and analysis on flash storage, read the analysis of the latest TechTarget Storage Purchasing Intentions Survey and our Snapshot Survey report, Use of solid-state technology continues to climb. And for information on a wide variety of solid-state storage product and implementation topics, please visit SearchSolidStateStorage.com.


August 7, 2014  10:35 AM

Appliances carry Symantec backup revenue

Dave Raffo
Backup Exec, NetBackup, Storage, Symantec

Symantec's transformation toward an integrated backup appliance model accelerated last quarter, as revenue from its NetBackup appliances increased 35 percent over last year.

Symantec’s backup business results last quarter followed a familiar pattern. NetBackup sales increased, mainly on the strength of its appliances, while Backup Exec revenue dropped. The vendor did not break out its total backup revenue, although the Information Management category that backup is part of was flat from last year at $650 million.

The appliance business has grown rapidly since Symantec began selling its backup software on integrated hardware instead of relying on third-party disk targets. CFO Thomas Seifert pointed out that, according to IDC's research, Symantec has gone from no share of the backup appliance market to 38 percent in only a few years. Symantec is second behind EMC in backup appliance revenue.

But because Symantec does not give the total revenue for NetBackup, it’s impossible to say if it is adding new customers or switching over those who were already using its enterprise backup software.

Interim CEO Michael Brown said Backup Exec sales were hurt by a pause in sales by channel partners ahead of the recently released Backup Exec 2014 for SMBs. However, Backup Exec sales have been in decline since the poorly received Backup Exec 2012 came out two years ago. Symantec execs hope the new version will satisfy unhappy customers who refused to upgrade to BE 2012.

He said Symantec’s next step in backup will be towards the cloud. “We’ll be moving our products to the cloud to complement the strength we already have in our cloud-based archiving business,” Brown said.

It’s not clear if he was talking about both NetBackup and BE. Symantec discontinued its BackupExec.cloud service in January.

Brown said Symantec's CEO search committee has narrowed its list to finalists, and its goal is to reveal its choice by the end of September. He said the ideal candidate has experience in technology closely related to Symantec's, a global operations background, a collaborative leadership style and experience as CEO of a public company.

Brown has been interim CEO since the Symantec board fired Steve Bennett in March.


August 6, 2014  8:09 AM

Who needs Dell? Nutanix tripled revenue last quarter

Dave Raffo
Dell, Nutanix, Storage

Nutanix hasn’t been sitting idle waiting for its Dell OEM deal to kick in.

The hyper-converged system vendor today said it exceeded $50 million in revenue for the quarter that finished at the end of July. Nutanix said it is picking up larger customers, with 29 companies buying more than $1 million of Nutanix products and services. That number has more than doubled since January, when Nutanix had 13 million-dollar customers.

Nutanix, which raked in $101 million in funding in January, has more than 600 employees.

Nutanix SVP of product management Howard Ting said the vendor’s revenue more than tripled from the second calendar quarter of 2013 to the second quarter of this year. He attributed that mainly to increased brand recognition and the addition of new versions of its Virtual Compute Platform. Nutanix systems include storage, networking, and compute in one box. It started with one configuration, but late last year added entry level and data center models.

“Expansion of our platform really helped,” Ting said. “Three years ago when we came out, we had one product with a set amount of CPU, memory and disk. One reason we lost deals was because of product market fit – the customer’s workload wouldn’t fit on that platform. We didn’t have a storage-heavy appliance for databases or applications with large datasets like Exchange then. Now, we have a whole range of appliances, ranging from branch offices to more heavy data workloads.”

Ting expects to get another big bump from Dell, which in June entered an OEM deal with Nutanix. Ting said the vendors are on track to begin selling Dell hardware with Nutanix software beginning in October. Dell hasn’t released product specs yet, but Ting said Dell will eventually have “a full spectrum of products” incorporating Nutanix.

Ting said Nutanix is nibbling away at larger storage vendors such as EMC, NetApp, IBM and Hewlett-Packard, which have reported declining sales in recent quarters. “Large companies are starting to feel the impact,” he said. “The disruption created by young companies like Nutanix is eating into their revenue.”


July 31, 2014  10:32 PM

IBM storage Fellow discusses flash strategy

Carol Sliwa
Storage

IBM Fellow Andrew Walls thinks most if not all active data will eventually reside on flash storage, and when Walls speaks, IBM Storage listens.

Walls is responsible for setting the strategy for the company's storage portfolio and defining the architecture for next-generation flash arrays and storage class memories. He received the highest distinction of his career this year when IBM named him a Fellow, making him one of only 257 employees ever to achieve the honor, and one of 87 active Fellows out of more than 400,000 IBM employees.

On the eve of next week’s Flash Memory Summit in Santa Clara, California, Walls took time out today to discuss IBM’s flash strategy with SearchStorage.com. Interview excerpts follow:

What do you see as the optimal architecture for next-generation flash?

Walls: I’ve been with the acquisition of Texas Memory Systems since its beginning, and before that, I was the chief architect and CTO for our flash strategy. Through the years, we’ve adapted to what’s been happening, and I think the tipping point is there, and we really are at a point where data reduction combined with the technology that we have today can be used to put all active data, or most active primary data, on flash.

So, I see the future being to continue that reduction in overall cost per gigabyte, based on data reduction and next-generation flash, as well as enabling things like [triple-level cell] TLC, if possible, to continue lowering the cost, but to do that perhaps as a tiering strategy and to also look at next-generation storage class memories also with tiering and decrease the latency by being able to use phase-change memory or resistive RAM or next-generation storage class memories as the tier for hot data.

I see continuing with certainly the [multi-level cell] MLC flash that is there, continuing to reduce the cost, do more features and data reduction, but then looking at other technologies to see if they also can be used in the all-flash arrays to improve the performance and further decrease the cost.

Do you think TLC is realistic for enterprise storage? Will it be TLC or 3D NAND?

Walls: I think 3D NAND for sure is going to come in. Back in 2008, 2009, we were working closely with different companies, and we said MLC was going to come into the enterprise. There were a lot of people who said, ‘No, it’s going to be [single-level cell] SLC for a long time.’ And we were the first to really bring MLC into enterprise storage.

When I look at TLC, it’s even more of a challenge, of course. You’re talking about in some cases a few hundred cycles. But, we are looking to see if we can bring it in in innovative ways . . . You could think of maybe read-mostly applications or a tiered architecture where most of the hot accesses are serviced out of DRAM or out of MLC, and you’d have some TLC. We think that the benefits are enough that it really [merits] a serious look to see if it can also be used to further reduce the costs.

IBM’s all-flash arrays use eMLC flash in contrast to a lot of purpose-built flash arrays that use cheaper MLC drives. How important is the type of flash these days now that manufacturers have figured out ways to improve the reliability and endurance? Why is IBM still using eMLC?

Walls: It is true to a certain extent that the flash manufacturers have figured out how to improve the endurance of the devices themselves. However, as the geometries continue to shrink, the endurance that you get out of the 20-nanometer and 15- or 16-nanometer bare MLC flash is only 3,000 write/erase cycles. That’s all that the manufacturer will guarantee.

So, we believe that in this generation with the [FlashSystem] 840 that the eMLC allows us . . . to be able to get a 10x improvement in endurance without having to worry about it and pass that on to our customers. We think eMLC right now is a very valuable add, and other competitors use it as well. There are many who don’t, but I think one has to be careful to see how they make sure that they aren’t going to wear out. We believe eMLC right now is an important part of our strategy.


July 31, 2014  2:52 PM

Categorizing solid state storage systems

Randy Kerns
Solid-state storage, Storage

There are many types of implementations of solid state, or flash, storage systems. At Evaluator Group, we regularly field questions and work on projects regarding solid state storage with our IT clients. In addition to the performance explanations and evaluation guide for solid state, we find it necessary to categorize the different implementations to aid our clients' understanding.

The categorizations do not necessarily match what vendors would say in positioning their products. The important point is that the categorization has served us well in communicating with our IT clients. However, we understand that nothing is static in the area of storage. Like the technology, these explanations will evolve with new developments.

Here are the categories and explanations that have worked well so far:

  1. All-solid state (flash) storage systems – These are new system designs built for solid state from the start. These designs optimize performance for the given amount of system hardware.
  2. Hybrid arrays in a new system design – Hybrid arrays use both solid state (usually in the form of flash SSDs) and hard disk drives (HDDs), with the idea that large-capacity HDDs will decrease the overall price of the system. As a new design, all I/O goes through the SSDs and the HDDs serve as backing storage (see the sketch below).
  3. All-solid state (flash) storage systems based on traditional storage systems with major modifications – These are traditional storage systems designed for spinning disks but modified to take advantage of solid state with the addition of embedded software. Evaluator Group looks at the design changes made to determine their significance.
  4. Hybrid arrays based on traditional storage systems – This large segment includes the traditional storage systems designed for spinning disks where solid state drives (SSDs) are added for cache and/or tiered storage. In these systems, a small percentage of SSDs will quickly max out the performance of the system, increasing aggregate system performance by 2x to 4x.

As technology evolves, there will be changes to these categories. Certainly, acquisitions will occur, changing what vendors offer and how products are positioned. Over time, more extensive changes will be made to traditional systems that are limited by their spinning-disk designs.
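
To make category 2's write path concrete: every write lands on the solid state layer, and the HDDs act purely as backing storage. The toy model below is a minimal sketch of that general design, not any vendor's implementation; the dict-backed tiers simply stand in for real media.

```python
class HybridArray:
    """Toy model of a new-design hybrid array: every write lands on SSD
    first, and a background destage moves cold blocks down to HDD."""

    def __init__(self):
        self.ssd = {}   # fast tier: block id -> data
        self.hdd = {}   # capacity tier: block id -> data

    def write(self, block_id, data):
        self.ssd[block_id] = data          # all I/O goes through the SSD tier

    def read(self, block_id):
        if block_id in self.ssd:           # hot data served at flash latency
            return self.ssd[block_id]
        return self.hdd[block_id]          # cold data read from backing HDDs

    def destage(self, block_id):
        """Background task: move a cold block to the HDD backing tier."""
        self.hdd[block_id] = self.ssd.pop(block_id)
```

The design choice this illustrates is that the HDDs never sit on the foreground I/O path, which is what distinguishes category 2 from category 4, where SSDs are bolted onto a disk-era design as cache or a tier.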

The biggest evolution of all will be the introduction of new solid state technology. Forward-thinking system designers have anticipated this and will seamlessly (and optimally) advance to the new technology when the economics are favorable. This is one of the reasons we use the solid state storage terminology rather than referring only to the current implementation of NAND flash. We will adapt the categorization we use with our IT clients to fit the current implementation. Meanwhile, it is great to see the continued advances in technology and implementations for storage systems.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 30, 2014  3:36 PM

Overland builds higher performance, snapshots into its GuardianOS

Sonia Lelii
Overland Storage, Storage

Overland Storage has upgraded its GuardianOS operating system, which powers its SnapServer Dx1 and Dx2 NAS, to make its devices more compatible with Windows in mixed environments. The upgrade is also designed to boost performance, and includes a new BitTorrent-powered sync-and-share feature for mobile devices.

The GuardianOS integrates replication, thin provisioning, snapshots, backup, file sharing and security for the SnapServer Dx1 and Dx2. The Dx1 is a 1U system that scales to 160 TB while the Dx2 is a 2U server that scales to 384 TB.

The software's Windows-only Tree feature improves permission handling and authentication in mixed Windows and Mac environments. Each time a Windows or Mac user opens a file, the updated file is written with Windows data attributes. Typically, a Mac system will switch the data attributes when a file is opened for updates.

“They will remain in Windows attributes because you want to keep a certain attribute type,” said Jeremy Zuber, Overland's product marketing manager. “If attributes are flip-flopped, you can run into issues.”

The GuardianOS also has been enhanced with the Lightweight Directory Access Protocol (LDAP), allowing administrators to set permissions and specify access to directories through name lookups to and from a unique user identifier (UID). The software also uses Server Message Block (SMB) 2.0 for improved read and write performance for Windows clients and servers when accessing SnapServer storage.
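
A name-to-UID lookup of the sort described is, in LDAP terms, a simple directory search. The following sketch uses Python's ldap3 library to show the general shape of such a lookup; the server address, credentials, base DN and attribute names are illustrative assumptions, not GuardianOS details.

```python
from ldap3 import ALL, Connection, Server

# All connection details below are hypothetical placeholders.
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Name -> UID: look up the numeric identifier for a user name.
conn.search("ou=people,dc=example,dc=com", "(uid=jsmith)",
            attributes=["uidNumber"])
uid = conn.entries[0].uidNumber.value if conn.entries else None

# UID -> name: the reverse lookup filters on uidNumber instead.
if uid is not None:
    conn.search("ou=people,dc=example,dc=com",
                f"(uidNumber={uid})", attributes=["uid"])
```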

The operating system’s snapshot capability has been upgraded for higher performance with a more efficient copy-on-write process.
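
In general terms, a copy-on-write snapshot shares blocks with the live volume and copies a block aside only the first time it is overwritten. This minimal sketch illustrates that general mechanism, not Overland's code:

```python
class CowVolume:
    """Toy copy-on-write volume: a snapshot shares blocks until overwrite."""

    def __init__(self):
        self.blocks = {}      # live data: block id -> data
        self.snapshot = None  # preserved originals: block id -> data

    def take_snapshot(self):
        self.snapshot = {}    # instant: nothing copied until a write occurs

    def write(self, block_id, data):
        if self.snapshot is not None and block_id not in self.snapshot:
            # First overwrite since the snapshot: save the old block aside.
            self.snapshot[block_id] = self.blocks.get(block_id)
        self.blocks[block_id] = data

    def read_snapshot(self, block_id):
        # Snapshot view: preserved original if copied, else the shared block.
        if self.snapshot and block_id in self.snapshot:
            return self.snapshot[block_id]
        return self.blocks.get(block_id)
```

Because nothing is copied at snapshot time, the cost is paid incrementally on first overwrites, which is why a more efficient copy-on-write path translates directly into better write performance.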


July 25, 2014  1:51 PM

Unitrends adds Hyper-V support to re-branded virtual backup

Dave Raffo
PHD Virtual, Storage, Unitrends

Unitrends has gone GA with its first new version of the PHD Virtual Backup application since acquiring PHD Virtual last December. Virtual Backup 8, released last week, gained support for Microsoft Hyper-V and lost the PHD brand.

The product is now called Unitrends Virtual Backup. It joins a roster of Unitrends-branded software that includes Unitrends Enterprise Backup (UEB) and Unitrends ReliableDR (previously a PHD product), plus the Unitrends Certified Recovery Suite that bundles the other three apps. Unitrends also sells a series of integrated appliances that run UEB software.

Unitrends claims Virtual Backup 8 has more than 140 enhancements, most of them around making the product easier to use in hopes of taking on Veeam Software for virtual backups.

“Simplicity has always been the No. 1 reason customers pick us,” said Joe Noonan, Unitrends senior product manager. “As we gain larger customers and get into deployments of 1,000 VMs or more, simplicity takes on new requirements.”

The application redesign includes the ability to back up and recover data in four clicks. It is also the first version of Virtual Backup with Microsoft Hyper-V support built in. PHD launched a separate version for Hyper-V earlier this year, but Virtual Backup now supports VMware, Citrix and Microsoft hypervisors in one application.

Virtual Backup does not yet work with Unitrends integrated backup appliances, but Noonan said that is on the roadmap. “The first step is to centrally manage everything,” he said. “Then we will integrate it all under the hood.”

