March 5, 2014 4:49 PM
Posted by: Dave Raffo
Storage vendors are counting more on cloud storage providers and less on enterprises for large implementations these days. Rackspace’s new SAN design is a prime example of that.
Rackspace upgraded to Brocade Gen5 (16 Gbps) Fibre Channel (FC) switching and EMC VMAX enterprise arrays to achieve greater performance and density in its data centers around the world, according to Rackspace CTO Sean Wedige. Rackspace previously used Brocade’s 8 Gbps switches, but redesigned its SAN to take advantage of the extra bandwidth and density of the new gear.
“Storage is a big driver for us,” Wedige said. “Our customers’ storage is growing exponentially. It dwarfs what we saw just two years ago. We’re looking to increase our densities and speed.”
Rackspace’s new SAN setup links Brocade DCX 8510 Backbone director switches with Brocade’s UltraScale Inter-Chassis Links (ICLs), with ports connected to EMC VMAX enterprise storage arrays and Dell servers. Using four 16 Gbps FC cables gives Rackspace 64 Gbps of connectivity between directors. UltraScale ICLs can connect up to 10 Brocade DCX 8510 Backbones.
Rackspace also uses EMC VNX and Isilon storage.
Wedige said Rackspace runs FC SANs in seven of its eight data centers around the world, and will eventually add FC to its Sydney, Australia data center too. He said Rackspace has multiple petabytes of storage and thousands of ports, and Brocade’s Gen5 switching enables more ports per square foot in the data center.
“The bulk of our large customers are using a SAN,” he said. “We use SANs for customers who are looking for dedicated infrastructure, high performance and fault tolerance.”
Wedige said the new design lets Rackspace connect servers to storage that is in different data centers, giving the hosting company more flexibility and better port utilization.
He said that, like all of Rackspace’s storage infrastructure, the new SAN design underwent thorough testing before it was deployed.
“One of our biggest challenges is, because of our scale, we tend to break a lot of things,” he said. “Vendors appreciate that we put stuff to the test, but a lot of it may not be as suitable as we’d like for our environment.”
March 4, 2014 8:29 AM
Posted by: Randy Kerns
The term storage virtualization has been with us since 1999, and the concept continues with new product offerings that are variations of the original.
The longevity of storage virtualization in a high tech world where new ideas gain a foothold rapidly is a testament to the value that storage virtualization delivers. But there are many descriptions for storage virtualization based on the variety of products and the desire of vendor marketing to distinguish their products. A quick review of what is encompassed by the general phrase “storage virtualization” might be useful to characterize these offerings and the context in which they are typically used.
First, let’s look at the descriptions of virtualization:
- Grouping or pooling of resources for greater resource utilization.
- Abstraction to enable storage management at a higher level. This includes the promised ability to automate actions across the virtualized resources and the ability to use the same management tools across heterogeneous devices.
- Applying advanced features such as remote replication and point-in-time copies (snapshot) across the aggregated, abstracted resources without having to use multiple, device-specific capabilities.
- Distribution of data to aid in performance. This may be for parallel access or load balancing.
- Transparent migration of data between LUNs and storage systems for purposes such as asset retirement, technology upgrades, and load and capacity balancing.
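The pooling, load-balancing and transparent-migration ideas in the list above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s implementation; the class and method names are invented for the example.

```python
# Toy model of block virtualization: capacity from heterogeneous backend
# arrays is pooled, and virtual LUNs are carved from the pool without the
# host needing to know which array actually holds the data.

class BackendArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class VirtualPool:
    def __init__(self, arrays):
        self.arrays = arrays
        self.luns = {}  # virtual LUN id -> (backing array, size in GB)

    def provision_lun(self, lun_id, size_gb):
        # Simple load balancing: place the LUN on the backend array
        # with the most free space in the pool.
        target = max(self.arrays, key=lambda arr: arr.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        self.luns[lun_id] = (target, size_gb)
        return target.name

    def migrate_lun(self, lun_id, dest):
        # Transparent migration: the virtual LUN id the host sees never
        # changes; only the backing array moves (e.g. for asset retirement).
        src, size_gb = self.luns[lun_id]
        if dest.free_gb() < size_gb:
            raise RuntimeError("destination full")
        dest.used_gb += size_gb
        src.used_gb -= size_gb
        self.luns[lun_id] = (dest, size_gb)

a = BackendArray("vendor-a", 1000)
b = BackendArray("vendor-b", 500)
pool = VirtualPool([a, b])
placed = pool.provision_lun("lun0", 200)  # lands on the emptier array
```

The host-facing LUN identity staying stable across `migrate_lun` is what makes retirement and technology upgrades transparent in the sense the list describes.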
The different types of storage virtualization are depicted in this graphic:
There are preconceptions about the term storage virtualization that exist primarily because of the success of products and product marketing. Most understand storage virtualization to be block virtualization, where the LUNs presented to attached hosts are constituted from multiple storage resources that may come from different storage systems and vendors. The next preconception is that the block virtualization is done in-band (in the data path), which again comes from the success or predominance of those types of solutions. In fact, there are several locations where the virtualization can occur, either in-band or out-of-band.
The attachment of external storage systems/arrays by other storage systems (storage system-based virtualization) has been commonly deployed with many vendors under different names. Using an appliance in the data path (in-band) is the most prevalent method of storage virtualization. And abstracting the access to the storage through software installed on servers is another approach that is usually out-of-band from the data access standpoint but is done through the control of where the access is targeted.
Confusion remains around storage virtualization as vendors try to highlight different characteristics their products bring to customers. Ultimately, storage virtualization is all about the value delivered. It can affect the capital expense with greater resource utilization and greater performance. It can have a measurable effect on the operational expense with management costs, licensing costs for advanced features, and the ability to transparently migrate data between systems. IT must look beyond the labels applied to the solution and focus on the value the solution brings.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
February 28, 2014 6:17 PM
Posted by: Sonia Lelii
Nexenta this week enhanced its open source ZFS-based NexentaStor storage software with a code base migration to the illumos open source operating system, support for 512 GB of memory per cache head, and a faster high-availability process.
NexentaStor is a unified storage platform that supports Fibre Channel and iSCSI for block storage, and NFS, CIFS and SMB for files, across active/active controllers. It delivers data services such as unlimited snapshots, clones, thin provisioning, inline deduplication, compression and replication across hard drive, all-solid-state drive (SSD) and hybrid configurations.
Thomas Cornely, Nexenta’s vice president of product marketing, said the move to illumos will give customers more flexibility and less vendor lock-in. The company’s customer base ranges from customers storing 18 TB to those storing petabytes of data. Nexenta has 2,500 paying customers, 1,000 of which are hosting providers.
“Nexenta 4.0 is a good foundation to expand our target market, which is small configurations and ever-growing bigger configurations. We don’t see much in the middle,” Cornely said.
The latest version of NexentaStor speeds up failover by 50 percent because the process has been streamlined to require fewer steps. This enhancement is particularly important for multi-petabyte configurations where hundreds of drives are involved.
“The enhancements are in multi-threading,” Cornely said.
Another uptime-boosting enhancement is with the Fault Management Architecture (FMA), which intelligently detects failing hardware to reduce application interruptions. When a drive gets slow or is not working well, the FMA capability helps take it out of the RAID storage group.
NexentaStor 4.0 also now supports Server Message Block (SMB) 2.1 for Microsoft Windows Server 2012 and cloud environments.
February 28, 2014 11:24 AM
Posted by: Dave Raffo
In its first quarter as a public company, Nimble Storage sidestepped the IT spending slowdown that its larger competitors say has hampered sales.
Nimble reported $41.7 million in revenue last quarter, more than double the $20.2 million it reported in the same quarter the previous year. Nimble’s $126 million in revenue last year also more than doubled the $54 million of the previous year. For this quarter, Nimble expects revenue in the range of $42 million to $44 million, roughly double the $21.1 million it generated a year ago.
In comparison, EMC’s storage product revenue grew 10 percent year-over-year last quarter. NetApp’s FAS and E Series revenue declined five percent, Hewlett-Packard’s storage revenue stayed the same despite a substantial increase in its 3PAR storage and IBM’s storage revenue declined 13 percent.
Part of the reason for Nimble’s rapid growth is that its revenue is still tiny compared to the giants it competes with. It also sells mostly lower-priced systems to smaller companies — many of which are adding Nimble arrays to earlier implementations — rather than large enterprises that take longer to make purchases. Nimble’s average deal price is under $70,000, but it also reported an increase in deals of more than $100,000 last quarter.
Nimble claims it added 527 new customers last quarter, bringing its total to 2,645.
EMC and NetApp executives talked about cautious IT spending and longer evaluation cycles during their earnings calls, but Nimble VP of marketing Dan Leary said his company has not run up against those trends while selling its iSCSI hybrid flash arrays.
“We haven’t seen those headwinds in our business,” he said. “We’re winning deals because we’re delivering better performance and better capacity with about one-third to one-fifth the amount of hardware that our competitors require. The primary thing limiting our growth is the ability to hire and recruit headcount, and that’s why we’re investing heavily in the company. We’re not seeing any market limitations.”
That investment is also part of the reason Nimble is still losing money. It lost $13 million last quarter and $43 million for the year. The losses are expected to continue at least into late 2015, but CEO Suresh Vasudevan said the company will continue to invest and grow, and has around $208 million in cash.
On the product front, Nimble is investing in more enterprise features. It added capabilities to its CASL operating system last year that allow customers to set up clusters of scale-out arrays. Another item on the roadmap is Fibre Channel support. Vasudevan said on the earnings call that Fibre Channel support is planned for late this year to help win deals at large companies already invested in the protocol.
He said Nimble currently wins about 40 percent of its deals against FC SAN arrays, but “at the same time, there are several large enterprises that have already made an investment in Fibre Channel and that becomes a stumbling block for us.”
February 25, 2014 4:10 PM
Posted by: Sonia Lelii
Flash appliance vendor Astute Networks today announced the latest version of its ViSX G4 operating system that now supports NFS and OpenStack Block Storage (“Cinder”) for private and public clouds.
Keith Klarer, Astute founder and vice president of engineering, said ViSX can be used as an NFS target within OpenStack or as a block iSCSI device within OpenStack. The new version, ViSX OS 5, gives Astute cloud support it previously lacked.
“This is the first integration of our service into the cloud,” Klarer said. “We are starting with OpenStack because we think cloud OpenStack has the biggest opportunities. Rackspace is OpenStack-based, and so are Dell and IBM SoftLayer.”
An OpenStack Cinder plug-in can be downloaded that talks directly to the ViSX platform. The Cinder features are managed directly through an OpenStack interface; previously, the ViSX interface had to be used.
“OpenStack has its own interface,” Klarer said. “You don’t have to manage separately from OpenStack. OpenStack can directly control our applications.”
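The plug-in model Klarer describes can be sketched as follows. Cinder volume drivers implement a small set of entry points (create_volume, delete_volume and so on) that translate OpenStack requests into backend-specific calls. The classes below are hypothetical stand-ins for illustration, not Astute’s actual driver or the real cinder.volume.driver.VolumeDriver base class.

```python
# Hypothetical sketch of the shape an OpenStack Cinder volume driver takes.
# The backend client stands in for whatever management API the array
# (e.g. ViSX) actually exposes; all names here are invented.

class FakeArrayClient:
    """Stand-in for a vendor's array management API (hypothetical)."""
    def __init__(self):
        self.volumes = {}  # LUN name -> size in GB

    def create_lun(self, name, size_gb):
        self.volumes[name] = size_gb

    def delete_lun(self, name):
        self.volumes.pop(name, None)

class ExampleVolumeDriver:
    """Mimics the driver entry points Cinder calls on a plug-in."""
    def __init__(self, client):
        self.client = client

    def create_volume(self, volume):
        # Cinder hands the driver a volume description; the driver
        # translates it into a backend provisioning call.
        self.client.create_lun(volume["name"], volume["size"])

    def delete_volume(self, volume):
        self.client.delete_lun(volume["name"])

client = FakeArrayClient()
driver = ExampleVolumeDriver(client)
driver.create_volume({"name": "vol-1", "size": 10})
```

Because OpenStack only ever talks to these entry points, the array can be managed from the OpenStack interface without touching the vendor’s own UI, which is the point Klarer is making.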
Klarer said virtualized applications moved to a private or public cloud need high-performance storage. Within OpenStack, administrators can define storage space, allocate it to applications, and quickly provision high-performance storage in virtualized and cloud environments.
“OpenStack can directly control applications and anything you need to do for management,” Klarer said.
Other benefits of the OpenStack support include the ability to bypass VMware license fees for managing applications, allocating storage and migrating workloads among servers.
“OpenStack gives you the same functionality, but it provides it for free,” Klarer said. “The open virtualization management layer is free. People are seeing a value to these alternatives.”
Ashish Nadkarni, research director for storage software at IDC, said Astute faces an uphill battle even with the new capabilities.
“Their claim to fame is an ASIC. In a world of software-defined storage, where does a company with a custom ASIC go?” he asked.
February 20, 2014 11:22 AM
Posted by: Dave Raffo
QLogic’s acquisition of Ethernet controller assets from Broadcom this week won’t have much immediate effect on storage, but it could become important in a few years.
The $147 million deal will give QLogic 40-gigabit and 100-gigabit Ethernet, RDMA and virtualization technologies it currently lacks. QLogic also gets about 170 engineers in the deal, and claims the Broadcom portfolio makes it No. 2 in the Ethernet controller market behind Intel.
What makes the deal interesting is QLogic’s long-term roadmap. During a conference call to explain the deal, QLogic executives laid out plans to add services such as caching, replication, deduplication, encryption and monitoring on converged Ethernet-Fibre Channel controllers. That would turn QLogic from a network connectivity vendor to a platform vendor.
The first step is to port the Mt. Rainier caching technology used on QLogic’s FabricCache FC host bus adapters to Ethernet cards. FabricCache serves as a caching SAN adapter, and a cluster of FabricCache cards can access all the combined caches in that cluster.
The other features on QLogic’s roadmap – all important in storage and data protection – will follow. Completing the entire list will likely take at least three years, and a lot can happen in that time. One thing that can happen is that features such as replication, dedupe and encryption will already be common in storage products by then.
Vikram Karvat, QLogic VP of marketing, admitted those features are likely to be built into high-end storage arrays by then, but he said they will fit into other platforms.
“Some things that are considered high-end features are migrating into other markets, such as private or public clouds, or appliances,” he said. “This will allow non-traditional providers to add them.”
February 19, 2014 8:53 AM
Posted by: Randy Kerns
Based on a review of all-solid-state storage systems, it may be time for IT professionals to rethink their strategy regarding the lifespan of storage systems.
The storage system lifespan is a critical part of a storage strategy for scheduling replacements, planning acquisition costs, and in calculating Total Cost of Ownership (TCO). The lifespan is used in the amortization schedule or depreciation for the purchased asset.
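As a back-of-envelope illustration of how the planned lifespan feeds those amortization and TCO figures, here is a minimal straight-line depreciation sketch in Python; the dollar amounts are invented for the example.

```python
# Illustrative only: straight-line depreciation and a simple TCO figure,
# showing how the assumed lifespan drives both. Figures are made up.

def annual_depreciation(purchase_price, lifespan_years, salvage=0.0):
    """Straight-line depreciation charge per year over the planned lifespan."""
    return (purchase_price - salvage) / lifespan_years

def simple_tco(purchase_price, lifespan_years, annual_opex):
    """Capital cost plus operating costs over the system's lifespan."""
    return purchase_price + annual_opex * lifespan_years

dep = annual_depreciation(500_000, 5)  # $100,000 per year over five years
```

Stretching the planned lifespan from three years to five lowers the annual depreciation charge, which is precisely the economic lever behind the longer warranties and upgrade plans discussed below.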
Two major technology factors are important to consider when changing the strategy for a storage system lifespan. The first is the use of scale-out storage systems and the ability to replace individual nodes transparently for a technology upgrade. I will deal with this case in a future article. The second factor for consideration is the use of solid state storage.
The lifespan of a storage system does not mean the point when the system is no longer usable or reaches its “use by” date. It is really about planned replacement, with several elements playing into the replacement decision:
- Wear out. The system may have characteristics, such as mechanical wear, that cause the failure rate to increase, leading to more costly service and potential impacts.
- Maintenance costs. Storage systems have a warranty period, and maintenance costs begin to increase after that period ends. The costs become more prohibitive due to the increased likelihood that service will be required.
- Migration of data. Many storage systems do not have transparent or seamless migration of data to a new system. Scheduled replacement does not alleviate the problem but does make it a planned activity.
Solid state storage in the form of NAND flash has different wear-out mechanisms than spinning disks. Most storage vendors have made tremendous strides in managing the wear-out issues in NAND flash by changing the methods used for page erases and by using shadow RAM to minimize the number of erases. These improvements have continued to increase the lifespan for those vendors that have made the technology investment.
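The two techniques mentioned, spreading erases evenly across blocks and staging writes in RAM, can be illustrated with a toy model. This is a sketch of the general idea only, not any vendor’s controller firmware.

```python
# Toy model of two NAND wear-management ideas: buffering writes in RAM so
# many small writes cost a single erase, and wear leveling, which directs
# each erase at the least-worn flash block.

class FlashDevice:
    def __init__(self, num_blocks, buffer_size=4):
        self.erase_counts = [0] * num_blocks  # per-block erase tally
        self.buffer = []                      # RAM staging ("shadow RAM")
        self.buffer_size = buffer_size

    def write(self, data):
        # Writes accumulate in RAM; flash is only erased on flush,
        # so buffer_size writes amortize to one erase.
        self.buffer.append(data)
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # Wear leveling: erase the block with the lowest erase count,
        # so wear spreads evenly instead of burning out one block.
        victim = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[victim] += 1
        self.buffer.clear()

dev = FlashDevice(num_blocks=8)
for i in range(16):
    dev.write(i)  # 16 writes trigger only 4 flushes, on 4 distinct blocks
```

Without the buffer, the same 16 writes could cost 16 erases; without wear leveling, all of them could land on the same block. Both effects extend the usable lifespan of the device.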
Vendors of solid state systems are now quoting longer lifespans for their systems. There are two notable examples:
Pure Storage has changed its upgrade and maintenance model with an approach called Pure Storage Forever Flash. The approach includes a five-year maintenance plan with a free controller upgrade every three years when renewing the maintenance. With every upgrade, the maintenance terms for the storage system can be reset to the current pricing, which is expected to be lower.
Nimbus Data offers up to a 10-year end-to-end warranty for its system.
The warranty period and the upgrade plans will become competitive issues among vendors. Customers will see the economic benefit of the better offerings in TCO calculations, and other vendors will have to react. The change can lead to better long-term economics for customers using solid state storage systems. It is also an indicator of the evolving state of NAND flash technology: flash is getting less expensive, and data reduction adds to the cost reduction. The change also reflects how solid state is typically purchased today, which is for a specific purpose or application workload, sized to match those needs. As prices change and application capacity needs increase, another system is purchased, increasing the amount of solid state storage in use.
For vendors, longer warranties can be costly if the systems do not live up to their promises. Service cost estimates for storage systems are calculated on intrinsic failure rates, or on demonstrated failure rates where field data exists. The warranty period requires a financial reserve, taken from the sale price, to cover the costs. Vendors do not make the decision to extend a warranty lightly, but competitive pressures do factor into it.
It should be expected that more all solid state storage system vendors will extend the warranty for the system in some manner. The bigger issue for the customer is the change in the strategy for acquiring storage — and planning for replacement.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
February 14, 2014 10:21 AM
Posted by: Dave Raffo
Tintri CEO Ken Klein says the vendor will use its $75 million funding haul collected this week to grow the company substantially in hopes of going public next year.
Tintri, which sells arrays designed specifically for virtual machines, will use its latest funding to invest in “an aggressive product roadmap, double our sales force, triple our marketing spend and build out our infrastructure to be a public company,” Klein said.
“Our plan is to become the next great storage company.”
Tintri claims revenue from its VMstore storage arrays increased 115 percent last year over 2012, and its customers are running more than 100,000 virtual machines in full production.
Klein became CEO last October when founder and former CEO Kieran Harty moved into the CTO role. Klein said no change of direction was required and he was brought on to oversee the company’s growth.
One change at Tintri is that it now bills itself as “application-aware” storage instead of “VM-aware storage.” Klein said that’s because Tintri will add features such as application-specific quality of service and other capabilities that focus on applications rather than VMs.
“I believe VMs are proxies for applications,” he said. “We will be adding more capabilities for applications, that’s the direction of the company. We upleveled the messaging.”
As for the aggressive product roadmap, Klein said Tintri will add more enterprise features and cloud functionality. He added, “We’re always looking at designing new platforms” to complement its VMstore arrays.
The Series E funding brings Tintri’s total to $135 million and adds Insight Venture Partners to its list of investors. Previous investors Lightspeed Venture Partners, Menlo Ventures and NEA also participated.
February 13, 2014 2:09 PM
Posted by: Dave Raffo
NetApp’s earnings report this week was similar to EMC’s report last month. Like EMC, NetApp came in a tad below expected revenue while its forecast missed by a larger margin. The problems are the same: IT people are taking a long look at storage systems before they buy, and they are buying less. Companies are considering more ways to use the cloud and flash, and that is disrupting traditional storage sales.
NetApp executives are looking to win market share in this new world by helping customers move into the cloud with its OnTap storage operating system – which they now commonly refer to as software-defined storage – and gain performance through a myriad of flash products.
NetApp CEO Tom Georgens said during Wednesday’s earnings call that it is inevitable that the cloud will cut into the amount of storage people buy. The key for storage vendors, he said, will be to help customers better manage the data they move into the cloud.
“We’re not going to deny that data has gone to the cloud,” he said. “So I believe that will depress somewhat the growth rate of the industry or some of the historical norms. But our challenge is to recognize that data is going to go to cloud and the needs of customers to manage that data well and protect that data always gets more acute. So our point of view is we have Clustered OnTap and Data OnTap, and manage that data in our hardware and other people’s hardware.”
“Is the cloud mainstream? No, it’s not. And I think the [storage company] that enables customers to make it mainstream is going to be the winner in this race.”
NetApp’s long-time cloud strategy has been to sell storage to cloud providers. Now it is also trying to tie on-premises use of OnTap to public clouds, as with NetApp Private Storage for Amazon Web Services (AWS).
Instead of trying to match EMC’s software-defined storage strategy of bringing out a new platform (ViPR), NetApp pledges to do the job with OnTap – especially Clustered OnTap. Georgens said the vendor will soon bring out its first clustered-optimized FAS arrays. Presumably, the next step would be to tie clustered capabilities into its offering for cloud providers.
With flash, NetApp faces intense competition from the large storage vendors as well as from startups that beat the big guys to market with all-flash enterprise arrays. NetApp said it has shipped nearly 75 PB of flash storage, including roughly 25 PB on its all-flash EF series and FAS arrays fully loaded with flash. NetApp has yet to announce the ship date of its FlashRay all-flash system, which will have storage management features, such as data reduction, that the EF series lacks. NetApp also sells flash as a cache and as server-side flash.
“It’s a jungle out there,” Georgens said of the flash market. As for NetApp’s strategy, he said, “We believe that customers will deploy flash at every layer in the stack to solve a wide variety of challenges. This market is clearly not one-size-fits-all.”
NetApp reported revenue of $1.61 billion, up four percent from the previous quarter but down one percent year-over-year. Wall Street analysts expected $1.63 billion. Its income of $158 million beat expectations, but its guidance of $1.6 billion to $1.72 billion for this quarter was below analyst expectations of $1.73 billion.
Reasons for the low guidance include cautious IT spending, particularly by the U.S. federal government, and uncertainty at IBM, which sells NetApp FAS and E-Series storage through OEM deals. IBM sold its x86 server business to Lenovo, which could have an impact on storage sales, and it has been emphasizing its internally developed storage over OEM systems.