Storage Soup


March 14, 2014  8:44 AM

Avere upgrades its FXT high-performance system

Sonia Lelii

Avere Systems Inc. has added a new device with higher capacity and better I/O performance to its FXT 4000 series of NAS optimization appliances.

The FXT 4800 is an upgrade to the company’s all-flash FXT 4500 Edge filer, using 400 GB solid-state drives (SSDs) compared to the latter’s 200 GB SSDs. The 4800 also comes with a higher-performance CPU, which the company claims improves I/O performance by 40 percent.

“The typical customer that buys the 4000 series doesn’t want to deal with spinning disk,” said Jeff Tabor, director of product marketing at Avere. “We are seeing environments using the 4000 have a lot of legacy NAS and they need a performance boost. Rather than add flash to their storage systems, they find it more cost effective to add a performance tier.”

Avere’s FXT Edge filer appliances use a combination of DRAM, nonvolatile random access memory, SSDs and hard disk drives to accelerate the performance of other vendors’ NAS nodes. The Avere FXT 4000 series is designed for sequential, high-capacity storage requirements, while the FXT 3000 series targets random I/O performance.

Avere’s FXT filers reside between client workstations and core filers such as EMC Isilon, NetApp and Hitachi NAS, optimizing NAS access through a global file system.

A single FXT 4800 node holds twelve 400 GB SSDs, and a cluster of 50 nodes scales up to 240 TB. Two 4800 nodes can deliver 140,000 I/Os per second, while 50 nodes can reach up to 3.5 million I/Os per second.

By comparison, an FXT 4500 node holds fifteen 200 GB SSDs for 3 TB per node. Fifty clustered nodes scale up to 150 TB and deliver up to 2.5 million I/Os per second.
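
For readers checking the math, here is a minimal sketch of how the quoted cluster maximums fall out of the per-node drive counts. The per-node figures are the vendor's numbers; the helper function itself is purely illustrative.

    # Back-of-the-envelope check of the cluster capacity figures quoted above.
    # Per-node drive counts and sizes are the vendor's numbers; the helper is illustrative.

    def cluster_capacity_tb(ssd_count, ssd_gb, nodes):
        """Raw flash capacity of a cluster in terabytes (1 TB = 1,000 GB)."""
        return ssd_count * ssd_gb * nodes / 1000.0

    # FXT 4800: twelve 400 GB SSDs per node, 50-node maximum cluster
    print(cluster_capacity_tb(12, 400, 50))   # 240.0 TB, matching the quoted maximum

    # FXT 4500: fifteen 200 GB SSDs per node (3 TB per node), 50-node maximum cluster
    print(cluster_capacity_tb(15, 200, 50))   # 150.0 TB, matching the quoted maximum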

The FXT 4800 is available now.

March 13, 2014  10:15 AM

NetApp plans more cuts as market share drops

Dave Raffo

NetApp is planning to cut about 600 employees over the next year, following a 900-headcount reduction in 2013.

The vendor disclosed its plans in an SEC filing Wednesday, saying it expects to incur about $35 million to $45 million in charges for employee terminations and other costs. The 600-person reduction would amount to about five percent of NetApp’s total workforce, and follows NetApp’s disappointing earnings and even more disappointing forecast given last month. Like its larger rival EMC, NetApp is feeling the sting of cautious IT spending as well as storage buying patterns that are changing due to the cloud, flash and other new technologies.

NetApp is not alone in feeling the heat. EMC in January said it would cut its staff by around 1,000. But NetApp sales have taken a bigger hit.

According to IDC’s latest storage tracker numbers, NetApp grew revenue by 1.5 percent from the fourth quarter of 2012 to the fourth quarter of 2013. That was below the external storage industry growth of 2.4 percent. NetApp’s market share dropped from 11.6 percent to 11.5 percent – placing it third behind leader EMC and IBM. EMC grew 9.9 percent in the fourth quarter and had 32.1 percent of the market. NetApp did outperform IBM and No. 5 Hitachi Data Systems, which both declined in revenue from the previous year. No. 4 Hewlett-Packard gained 6.5 percent but remains behind NetApp with 9.6 percent of the market.

In a report on NetApp today, Wunderlich Securities analyst Kaushik Roy wrote that the vendor’s biggest challenge comes from increased competition from cloud and flash vendors.

“The biggest risk to NetApp is the new technologies that are disruptive to its existing products and the emerging storage companies that are gaining traction,” Roy wrote. “A large part of the non-mission-critical storage market is moving to the cloud. SMBs are increasingly using cloud-based compute and storage infrastructure … provided by vendors such as Amazon, Google, Microsoft and others who are using commodity hardware to build the infrastructure as a platform.”

Roy also wrote that NetApp’s investment in new technology has lagged its rivals, pointing out that NetApp does not yet have an all-flash array built from the ground up. NetApp does sell E-Series all-flash arrays for the high performance market, but its general purpose FlashRay all-flash system is not yet generally available while all other major storage vendors and a bunch of startups are selling all-flash systems.


March 12, 2014  7:39 AM

Condusiv brings central control for I/O optimization

Sonia Lelii

Condusiv Technologies recently upgraded its V-locity caching software with a central management console to manage I/O performance in physical and virtual environments from the application layer down to the storage.

V-locity takes a different approach to caching. The software is designed for Microsoft environments and resides in the physical server or virtual machine, providing intelligence to the operating system so it can generate I/O more efficiently. The idea is to solve performance bottlenecks without adding more hardware, such as solid-state drives (SSDs) or PCIe flash cards.

“There is an I/O explosion that is being created with the data explosion. We sit close to the application so that performance problems are solved right away,” said Robert Woolery, Condusiv’s chief marketing officer. “Everything from the host, the hypervisor, the network and storage gets the benefit of faster I/O.”

Woolery said the Microsoft operating system has inefficient write operations. It breaks up a file during the write process, so each piece of that file is associated with an I/O operation. That translates into numerous I/O operations for the storage. V-locity uses what the company calls a behavior analytics engine that gathers data on the application and operating system, which is then used to help the application optimize its I/Os.

“Based on the behavior, we tell the application a better way to optimize the I/Os. It tells the operating system not to break up a file and just send it as one I/O,” said Woolery. “The I/Os are optimized to the way the application likes its blocks sized. Some applications are optimized for different block sizes.”
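
Condusiv has not published its implementation, but the general technique Woolery describes — gathering small write fragments into one larger I/O sized to the application's preference — can be sketched roughly as below. The block size and function names are hypothetical, invented for the illustration.

    # Illustrative only: coalesce small write fragments into fewer, larger I/Os
    # aligned to a preferred block size. This is not Condusiv's code.

    PREFERRED_BLOCK = 64 * 1024  # hypothetical 64 KB preference; real applications vary

    def coalesce_writes(fragments):
        """Merge byte fragments into chunks no larger than PREFERRED_BLOCK."""
        chunks, buffer = [], b""
        for frag in fragments:
            if buffer and len(buffer) + len(frag) > PREFERRED_BLOCK:
                chunks.append(buffer)
                buffer = b""
            buffer += frag
        if buffer:
            chunks.append(buffer)
        return chunks

    # Ten 4 KB fragments become one 40 KB write instead of ten separate I/Os.
    print(len(coalesce_writes([b"x" * 4096 for _ in range(10)])))  # 1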

V-locity also works on inefficient read operations. Instead of reading the same file over and over, data is stored in an intelligent cache that serves up files based on usage time and frequency. The company claims this eliminates unnecessary I/O at the storage layer and improves response time by at least 50 percent.
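
Condusiv does not detail its scoring, but caching by usage time and frequency generally looks something like the hypothetical sketch below, which evicts the entry with the weakest combination of recency and hit count. The scoring formula is invented for the example, not taken from V-locity.

    import time

    # Hypothetical cache that evicts by a combined recency-and-frequency score.
    # The scoring formula is invented for illustration, not taken from V-locity.

    class HotFileCache:
        def __init__(self, max_entries=1024):
            self.max_entries = max_entries
            self.entries = {}  # path -> [data, last_access, hit_count]

        def get(self, path, read_from_disk):
            if path in self.entries:
                entry = self.entries[path]
                entry[1] = time.time()
                entry[2] += 1
                return entry[0]               # served from cache: no storage I/O
            data = read_from_disk(path)       # cache miss: one real read
            if len(self.entries) >= self.max_entries:
                # Evict the entry that is both old and rarely used.
                def score(p):
                    _, last, hits = self.entries[p]
                    return hits / (time.time() - last + 1.0)
                del self.entries[min(self.entries, key=score)]
            self.entries[path] = [data, time.time(), 1]
            return data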

Woolery said most server-side caching does predictive analysis based on the data just received.

“They don’t know,” said Woolery. “They are guessing at it. You could be wrong and you don’t care if that involves just one I/O. But if you guess wrong, you waste a lot of resources on I/Os that you don’t need.”

The latest version of V-locity adds a central management console so the software can be deployed via a single product.


March 11, 2014  11:29 AM

Nimble adds greater insight to InfoSight

Dave Raffo

Nimble Storage, growing at a greater rate than its larger competitors, this week enhanced its analytics program, which is considered a leader in the storage world.

Nimble added correlation capabilities to its InfoSight analytics and monitoring program. InfoSight now tracks key performance data to find the potential cause of problems inside the array and across the network. The program automatically notifies customers of potential problems so they can take action before the problems deepen.

InfoSight, launched in April 2013, collects performance, capacity, data protection and system health information for proactive maintenance. Customers can access the information on their systems through an InfoSight cloud portal. It can find problems such as bad NICs and cables, make cache and CPU sizing recommendations, and give customers an idea of what type of performance they can expect from specific application workloads.

“We used to point out things like the customer has challenges around latency on the volume level. Now if you have a latency spike, we tell you why you are experiencing that spike,” said Radhika Krishnan, Nimble’s VP of solutions and alliances. “We’re getting more granular. We can make precise recommendations on how to fix the problem and give customers a way to work around it – for example, turn off the cache for a particular volume or add cache controller or capacity to improve performance.”
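
Nimble has not disclosed how the correlation engine works; as a rough illustration of the general idea, the hypothetical sketch below picks out the system metric that moved the most from its baseline during a latency spike window. The metric names, threshold and sample data are invented.

    # Hypothetical sketch of spike correlation, not InfoSight's actual method:
    # find the metric whose value shifted most from baseline while latency spiked.

    def likely_cause(latency_ms, metrics, threshold_ms=5.0):
        """latency_ms: list of samples; metrics: dict of name -> list of samples."""
        spike = [i for i, ms in enumerate(latency_ms) if ms > threshold_ms]
        if not spike:
            return None

        def relative_shift(series):
            baseline = sum(series) / len(series)
            spike_avg = sum(series[i] for i in spike) / len(spike)
            return abs(spike_avg - baseline) / (baseline or 1.0)

        return max(metrics, key=lambda name: relative_shift(metrics[name]))

    latency = [1, 1, 2, 9, 10, 1]
    metrics = {"cache_miss_pct": [5, 5, 6, 40, 45, 6], "cpu_pct": [30, 31, 30, 33, 32, 30]}
    print(likely_cause(latency, metrics))  # cache_miss_pct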

Ed Rollinson, head of IT for British-based marketing services firm Brightsource, said he found InfoSight a pleasant surprise after installing two Nimble Storage arrays to replicate databases for disaster recovery.

“Those reports are incredibly powerful,” he said. “We can just click a few buttons and have a report showing us all the information we need to predict where you can go in the future for capacity planning. You need to be aware that your replication data sets are getting larger, therefore, if you want to meet RPOs [recovery point objectives], you need to increase your bandwidth. When I tell my directors I want to increase bandwidth, they will ask for all sorts of facts and figures about how and why. Of course, I can work that out, but it will take time. It helps that InfoSight can produce that information for me.”

Nimble is also extending InfoSight to resellers. If customers give their resellers permission, the reseller can access their information inside Nimble’s database and forecast potential problems.


March 7, 2014  6:13 PM

Slumping Violin Memory prepares a new tune

Dave Raffo

All-flash array vendor Violin Memory recorded less revenue and a greater loss than expected last quarter. Still, its earnings report was better than the previous quarter’s.

Violin’s first quarterly earnings report as a public company last November was a train wreck. Its poor revenue and guidance surprised investors and the storage industry, causing its stock price to plummet and a large investor to call for the company to put up a “for sale” sign. The board fired CEO Don Basile less than a month later and hired Kevin DeNuccio to replace him in early February.

Violin Thursday reported revenue of $28 million for the fourth quarter of 2013 and $108 million for the year. Its losses were $56 million for the quarter and $150 million for the year. Quarterly revenue was up 22 percent from a year earlier but down from $28.3 million in the previous quarter. For the year, revenue increased 46 percent over 2012, but the quarterly and yearly losses were greater than in the previous quarter and year.

DeNuccio outlined his plans for a turnaround on Thursday’s earnings call. Those plans consist mainly of reducing expenses by selling off its PCIe flash business and cutting staff related to that business. DeNuccio has revamped the Violin management team. He brought in Eric Herzog from EMC to head marketing and business development and Tim Mitchell from Avaya to take over global field operations.

DeNuccio said Violin will have new flash hardware and software products in the next few months. “We expect to make one of the most significant product announcements in our history,” he said.

He defended the decision to sell the PCIe business launched a year ago by saying, “It was clear that we grew too much, too fast. Now it’s a matter of how do we get the company into a size that is manageable, and how do we focus on an area that we are successful in?”

Violin was the all-flash array revenue leader in 2012, according to Gartner, but new entries from large players such as EMC, NetApp, Hitachi Data Systems, IBM, Dell and Hewlett-Packard changed the market in 2013.

“We have formidable competitors,” DeNuccio said. “We’re at the top of the pyramid, and we compete with the big boys. But we’re confident that our technology is unique enough and we can establish ourselves running the critical applications for our customers to allow us to compete at that level.”


March 6, 2014  12:46 PM

Cloud-to-cloud backup vendor discloses what data it can’t protect

Dave Raffo

Cloud-to-cloud backup vendor Spanning wants you to know that there is information its Backup for Google Apps cannot protect. Neither can its competitors protect those files.

The latest version of Spanning Backup for Google Apps, launched this week, includes a status reporting feature that shows customers problems with the most recent backup. This report includes data that cannot be backed up because of limitations in the Google API that affect files such as Google Forms and scripts.

“Customers need to trust that data will be there when they restore,” said Mat Hamlin, Spanning’s director of product management. “We’re now providing granular insight into each user’s data so administrators can understand what data has been backed up and what data has not been backed up. Third-party files, Google Forms and scripts are not available for us to back up. Customers may not be aware of that. When they come to us to back up all the data, the expectation is we will back up all that data. We want them to know what we cannot back up.”

Google Apps and Salesforce.com are the chief software-as-a-service (SaaS) applications protected by cloud-to-cloud backup vendors.

Spanning’s new report also brings other problems to customers’ attention so they can take action. It flags zero-byte files that could indicate corruption, and points out temporary problems that are likely to be resolved within two or three days.

Hamlin said the data that cannot be backed up typically makes up a small percentage of the data in Google Apps. He said Spanning is coming clean to add transparency, both for itself and its competitors. He said Spanning often tells customers up front about the limitations, but competitors will not admit to them.

Ben Thomas, VP of security at Spanning’s chief competitor Backupify, said Backupify for Google Apps runs into the same problems. However, he said there are ways to minimize these limitations.

“We do have similar things we run into,” Thomas said. “Some cloud systems, whether it’s Google or Salesforce or other apps, may not have API calls available to pieces of data. Some API calls may be throttled, so only so many API calls per hour or per day can be made. We’ve been smart over the years about the way we manage throttling. For example, Google will throttle the amount of data per day per e-mail. The limit is 1.5 gigabytes a day now. If we’re continually hitting that limit, we scale ourselves back to meet that. And we do that for every API.”
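
Backupify has not published its throttling code; the sketch below is a hypothetical illustration of the “scale ourselves back” idea, a simple per-mailbox daily-quota check made before each fetch. Only the 1.5 GB-per-day figure comes from Thomas's quote; the class and mailbox name are invented.

    # Hypothetical per-mailbox daily-quota throttle illustrating the behavior
    # described above. Only the 1.5 GB/day limit comes from the article.

    DAILY_LIMIT_BYTES = int(1.5 * 1024 ** 3)  # 1.5 GB per mailbox per day

    class DailyQuota:
        def __init__(self):
            self.used = {}  # mailbox -> bytes pulled so far today

        def can_fetch(self, mailbox, size_bytes):
            return self.used.get(mailbox, 0) + size_bytes <= DAILY_LIMIT_BYTES

        def record(self, mailbox, size_bytes):
            self.used[mailbox] = self.used.get(mailbox, 0) + size_bytes

    quota = DailyQuota()
    message_size = 200 * 1024 ** 2
    if quota.can_fetch("user@example.com", message_size):
        # ...make the backup API call here...
        quota.record("user@example.com", message_size)
    else:
        pass  # defer this mailbox until the quota window resets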


March 5, 2014  4:49 PM

Rackspace buys into Gen5 (16-gig) Fibre Channel

Dave Raffo

Storage vendors are counting on cloud storage providers more and enterprises less for large implementations these days. Rackspace’s new SAN design is a prime example of that.

Rackspace upgraded to Brocade Gen5 (16 Gbps) Fibre Channel (FC) switching and EMC VMAX enterprise arrays to achieve greater performance and density in its data centers around the world, according to Rackspace CTO Sean Wedige. Rackspace previously used Brocade’s 8 Gbps switches, but redesigned its SAN to take advantage of the extra bandwidth and density of the new gear.

“Storage is a big driver for us,” Wedige said. “Our customers’ storage is growing exponentially. It dwarfs what we saw just two years ago. We’re looking to increase our densities and speed.”

Rackspace’s new SAN set-up links Brocade 8510 Backbone director switches with Brocade’s UltraScale Inter-Chassis Links (ICLs), and its ports are connected to EMC VMAX enterprise storage arrays and Dell servers. Using four 16 Gbps FC cables gives Rackspace 64 Gbps of connectivity between directors. UltraScale ICLs can connect 10 Brocade DCX 8510 Backbones.

Rackspace also uses EMC VNX and Isilon storage.

Wedige said Rackspace runs FC SANs in seven of its eight data centers around the world, and will eventually add FC to its Sydney, Australia data center too. He said Rackspace has multiple petabytes of storage and thousands of ports, and Brocade’s Gen5 switching enables more ports per square foot in the data center.

“The bulk of our large customers are using a SAN,” he said. “We use SANs for customers who are looking for dedicated infrastructure, high performance and fault tolerance.”

Wedige said the new design lets Rackspace connect servers to storage that is in different data centers, giving the hosting company more flexibility and better port utilization.

He said, like all of Rackspace’s storage infrastructure, the new SAN design withstood thorough testing before it was deployed.

“One of our biggest challenges is, because of our scale, we tend to break a lot of things,” he said. “Vendors appreciate that we put stuff to the test, but a lot of it may not be as suitable as we’d like for our environment.”


March 4, 2014  8:29 AM

Defining storage virtualization

Randy Kerns

The term storage virtualization has been with us since 1999, and the concept continues with new product offerings that are variations of the original.

The longevity of storage virtualization in a high tech world where new ideas gain a foothold rapidly is a testament to the value that storage virtualization delivers. But there are many descriptions for storage virtualization based on the variety of products and the desire of vendor marketing to distinguish their products. A quick review of what is encompassed by the general phrase “storage virtualization” might be useful to characterize these offerings and the context in which they are typically used.

First, let’s look at the descriptions of virtualization:

  • Grouping or pooling of resources for greater resource utilization.
  • Abstraction to enable storage management at a higher level. This includes the promised ability to automate actions across the virtualized resources and the ability to use the same management tools across heterogeneous devices.
  • Applying advanced features such as remote replication and point-in-time copies (snapshot) across the aggregated, abstracted resources without having to use multiple, device-specific capabilities.
  • Distribution of data to aid in performance.  This may be for parallel access or load balancing.
  • Transparent migration of data between LUNs and storage systems for purposes such as asset retirement, technology upgrades, and load and capacity balancing.

The different types of storage virtualization are depicted in this graphic:

[Graphic: types of storage virtualization]

There are preconceptions about the term storage virtualization that exist primarily because of the success of products and product marketing.  Most understand storage virtualization to be block virtualization, where the LUNs presented to attached hosts are constituted from multiple storage resources that may come from different storage systems and vendors.  The next preconception is that the block virtualization is done in-band (in the data path), which again comes from the success or predominance of those types of solutions.  There are several locations where the virtualization can occur, either in-band or out-of-band.

[Graphic: in-band and out-of-band locations for storage virtualization]

The attachment of external storage systems/arrays by other storage systems (storage system-based virtualization) has been commonly deployed by many vendors under different names.  Using an appliance in the data path (in-band) is the most prevalent method of storage virtualization.  Abstracting access to the storage through software installed on servers is another approach that is usually out-of-band from the data access standpoint but works by controlling where the access is targeted.
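
To make the block-virtualization idea above concrete, here is a minimal, vendor-neutral sketch: the host sees one contiguous virtual LUN, while a mapping table redirects each extent to whichever backend array actually holds it. The extent size, array names and mapping layout are invented for the illustration.

    # Vendor-neutral illustration of block virtualization: a virtual LUN presented
    # to the host is composed of extents that may live on different backend arrays.
    # Extent size, array names and the mapping layout are invented for this example.

    EXTENT_SIZE = 1024 ** 3  # 1 GiB extents

    # virtual extent index -> (backend array, backend LUN, backend extent index)
    mapping = {
        0: ("array-A", "lun-7", 12),
        1: ("array-A", "lun-7", 13),
        2: ("array-B", "lun-3", 0),  # an extent transparently migrated to another array
    }

    def resolve(virtual_offset_bytes):
        """Translate a host-visible offset into a backend array, LUN and offset."""
        extent, offset = divmod(virtual_offset_bytes, EXTENT_SIZE)
        array, lun, backend_extent = mapping[extent]
        return array, lun, backend_extent * EXTENT_SIZE + offset

    print(resolve(2 * 1024 ** 3 + 4096))  # ('array-B', 'lun-3', 4096)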

Confusion remains around storage virtualization as vendors try to highlight different characteristics their products bring to customers. Ultimately, storage virtualization is all about the value delivered.  It can affect the capital expense with greater resource utilization and greater performance.  It can have a measurable effect on the operational expense with management costs, licensing costs for advanced features, and the ability to transparently migrate data between systems.  IT must look beyond the labels applied to the solution and focus on the value the solution brings.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 28, 2014  6:17 PM

Nexenta enhances its open storage platform

Sonia Lelii

Nexenta this week enhanced its open source ZFS-based NexentaStor storage software with a migration of its code base to the illumos open-source operating system, support for 512 GB of memory per cache head, and a faster high-availability process.

NexentaStor is a unified storage platform that supports Fibre Channel and iSCSI for block storage and NFS, CIFS and SMB for files across active/active controllers. It delivers data services such as unlimited snapshots, clones, thin provisioning, inline deduplication, compression and replication across hard drive, all-solid-state drive (SSD) and hybrid configurations.

Thomas Cornely, Nexenta’s vice president of product marketing, said the move to illumos will give customers more flexibility and less vendor lock-in. The company’s customer base ranges from those that store 18 TB to those with petabytes of data. Nexenta has 2,500 paying customers, 1,000 of which are hosting providers.

“Nexenta 4.0 is a good foundation to expand our target market, which is small configurations and ever-growing bigger configurations. We don’t see much in the middle,” Cornely said.

The latest version of NexentaStor speeds up failover by 50 percent because the process has been streamlined and involves fewer steps. This enhancement is particularly important for multi-petabyte configurations where hundreds of drives are involved.

“The enhancements are in multi-threading,” Cornely said.

Another uptime-boosting enhancement is with the Fault Management Architecture (FMA), which intelligently detects failing hardware to reduce application interruptions. When a drive gets slow or is not working well, the FMA capability helps take it out of the RAID storage group.

NexentaStor 4.0 also now supports Server Message Block (SMB) 2.1 for Microsoft Windows Server 2012 and cloud environments.


February 28, 2014  11:24 AM

Nimble Storage avoids damage from IT spending cuts

Dave Raffo

In its first quarter as a public company, Nimble Storage sidestepped the IT spending slowdown that its larger competitors say has hampered sales.

Nimble reported $41.7 million in revenue last quarter, more than double its $20.2 million in the same quarter the previous year. Nimble’s $126 million in revenue last year also more than doubled the previous year’s $54 million. For this quarter, Nimble expects revenue in the range of $42 million to $44 million, roughly double the $21.1 million it generated a year ago.

In comparison, EMC’s storage product revenue grew 10 percent year-over-year last quarter. NetApp’s FAS and E-Series revenue declined five percent, Hewlett-Packard’s storage revenue stayed flat despite a substantial increase in its 3PAR storage, and IBM’s storage revenue declined 13 percent.

Part of the reason for Nimble’s rapid growth is that its revenue is still tiny compared to the giants it competes with. It also sells mostly lower-priced systems to smaller companies — many of whom are adding Nimble arrays to earlier implementations — rather than large enterprises that take longer to make purchases. Nimble’s average deal price is under $70,000, but it also reported an increase in deals of more than $100,000 last quarter.

Nimble claims it added 527 new customers last quarter, bringing its total to 2,645.

EMC and NetApp executives talked about cautious IT spending and longer evaluation cycles during their earnings calls, but Nimble VP of marketing Dan Leary said his company has not run up against those trends while selling its iSCSI hybrid flash arrays.

“We haven’t seen those headwinds in our business,” he said. “We’re winning deals because we’re delivering better performance and better capacity with about one-third to one-fifth the amount of hardware that our competitors require. The primary thing limiting our growth is the ability to hire and recruit headcount, and that’s why we’re investing heavily in the company. We’re not seeing any market limitations.”

That investment is also part of the reason Nimble is still losing money. It lost $13 million last quarter and $43 million for the year. The losses are expected to continue at least into late 2015, but CEO Suresh Vasudevan said the company will continue to invest and grow, and has around $208 million in cash.

On the product front, Nimble is investing in more enterprise features. It added capabilities to its CASL operating system last year that allow customers to set up clusters of scale-out arrays. Another item on the roadmap is Fibre Channel support. Vasudevan said on the earnings call that Fibre Channel support is planned for late this year to help win deals at large companies already invested in the protocol.

He said Nimble currently wins about 40 percent of its deals against FC SAN arrays, but “at the same time, there are several large enterprises that have already made an investment in Fibre Channel and that becomes a stumbling block for us.”

