Storage Soup

A SearchStorage.com blog.

Apr 6 2007   12:54PM GMT

The uncomfortable marriage of Fibre Channel and iSCSI



Posted by: Beth Pariseau
Tags:
Storage protocols (FC / iSCSI)

We received the following comment from a VAR based in Florida on our piece covering the newly proposed Fibre Channel over Ethernet (FCoE) standard:

The concept of FC over Ethernet has very limited value. According to this article, the FCoE consortium is targeting this at low- to mid-range servers, over 10 GbE, and as a convergence technology. While 10 GbE makes sense from the storage array to the switch, it makes little to no sense from the server to the switch, for two reasons: first, low- to mid-range servers, where this is targeted, don't have the I/O requirements to saturate 1 GbE, much less 10 GbE, and their PCI busses would not be able to handle anywhere near 10 GbE throughput (do the math); and second, the reason FCP exists is not throughput but deterministic response time, which is guaranteed by the FC protocol, whereas the Ethernet protocol becomes more non-deterministic under high load. This lack of deterministic response time will not be fixed by FCoE.

For these reasons, FCoE to the server does not make sense on the low/mid end (they don't need the throughput and couldn't handle it anyway) or the high end (Ethernet lacks the predictable response time of FCP). So, back to the question of FCoE over 10 GbE from the storage array to the switch: if the storage arrays were 10GbE-capable, why not just use iSCSI, which is already supported and in wide use in the enterprise, despite what some manufacturers' marketing and media reports say? My personal opinion is that this is an effort on the part of manufacturers who are behind in iSCSI to change the game in an effort to compete, and it provides little to no value to consumers.
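
For anyone who wants to check the commenter's "do the math" point about server I/O, here is a rough back-of-the-envelope sketch in Python. The 64-bit/133 MHz PCI-X bus is our assumption for a typical low- to mid-range server of this era, not a figure from the comment:

    # Rough check of the "do the math" claim: can a typical server bus feed 10 GbE?
    # Assumption (ours): a 64-bit / 133 MHz PCI-X bus, common in low/mid-range servers.
    pcix_peak_gbps = 64 * 133e6 / 1e9      # ~8.5 Gb/s theoretical peak, before overhead
    print(f"PCI-X theoretical peak: {pcix_peak_gbps:.1f} Gb/s")
    print(f"Headroom over 1 GbE:    {pcix_peak_gbps / 1.0:.1f}x")
    print(f"Fraction of 10 GbE:     {pcix_peak_gbps / 10.0:.0%}")
    # Even at its theoretical peak, and before bus contention and protocol overhead,
    # the shared bus cannot sustain a single 10 GbE link, which is the commenter's point.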

We have the feeling this could develop into an interesting discussion in the industry over the next year or so, as FC and iSCSI, originally at odds in the market, have increasingly been combined in tiered storage environments and multiprotocol systems. Still, combining the two protocols, especially in the same data stream, could become a thorny issue.

What do you think?

6  Comments on this Post

 
  • Beth Pariseau
    I don't see much difference between this and FCIP (tunneling Fibre Channel data over IP networks). The FCIP spec was ratified in 2004 (IETF RFC 3821). I guess this is T11's cut at the same thing. FCIP never got any traction, so what makes them think FCoE will? This still requires expensive HBAs for protocol encapsulation into TCP/IP packets and "DMA-like" data movement. When you look at the economics of deploying FCoE, it may be less expensive to move to a 10 GigE infrastructure, especially when 10GBase-T hits the market. Why pay $2,000 (estimate) for an FCoE adapter when a 10GBase-T adapter will cost $500 by the time these products hit the market in 2009? Bottom line: Fibre Channel requires hardware for DMA data movement, whereas Microsoft and others are incorporating DMA functionality into the OS for iSCSI, and you can always run with just a standard software initiator and NIC. There are no software initiators for FCoE. If there were, it would be iSCSI, because Fibre Channel uses the same SCSI protocol that iSCSI uses. What customer is going to replace all of their Fibre Channel HBAs and storage target hardware, and replace their FC switches with 10 Gig Ethernet/FC hybrid switches or blades, when they can implement end-to-end 10 Gig iSCSI for less? I don't see the ROI or benefit. This protocol will be relegated to remote replication, like FCIP.
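    Taking the commenter's adapter price estimates at face value, the cost gap is easy to sketch in Python (both prices are the comment's projections for 2009, and the server count below is an arbitrary illustration):

        # Sketch of the adapter-cost argument, using the estimates in the comment above.
        fcoe_adapter_usd = 2000        # commenter's estimated price for an FCoE adapter
        tengbaset_nic_usd = 500        # commenter's estimated price for a 10GBase-T NIC
        servers = 100                  # arbitrary illustration, not from the comment

        fcoe_total = servers * fcoe_adapter_usd
        nic_total = servers * tengbaset_nic_usd
        print(f"FCoE adapters for {servers} servers:  ${fcoe_total:,}")
        print(f"10GBase-T NICs for {servers} servers: ${nic_total:,}")
        print(f"Difference:                           ${fcoe_total - nic_total:,}")
        # Switch and storage-target replacement, which the comment also raises,
        # would widen the gap further; they are not modeled here.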
  • Beth Pariseau
    I think FCoE makes good sense for array-to-array replication. On a campus backbone with 10GbE interconnects, it's considerably cheaper than dedicated fiber just to connect SAN islands. FCoE can leverage an existing (presumably underutilized) 10GbE infrastructure and add significant value. FCoE between the server and storage, however, makes little or no sense, except for specific DR concerns. If a storage array is in a hardened bunker and its server is separated by distance, that could work. Even then, 10GbE would clearly be the cheaper way to go.
  • Beth Pariseau
    FCoE is not FC over IP. It requires an enhancement to Ethernet to provide reliable transport at Layer 2, and this enhancement solves the determinism problem. Because no TCP/IP stack is required for storage traffic, CPU overhead is not a factor as it is with iSCSI, and a TOE is not required to move FCoE at high rates. FCoE exists within a single subnet and is not routable. It is a short-range protocol, not an alternative to FC over IP over extended distances or to iSCSI across subnets, and it is not suitable for replication. As for FCoE at 10Gb being overkill for servers, the concept is that there will be a single 10Gb Ethernet connection (or a single pair) to a server, and all IP and FC traffic will flow over this converged network. Think of a pair of 10Gb Ethernet ports as an alternative to four 1000BaseT ports and two 4Gb FC ports on a server. Server virtualization is driving up utilization rates and I/O requirements on servers, so 20Gb of connectivity vs. 12Gb (4 x 1000BaseT plus 2 x 4Gb FC) seems reasonable.
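    The consolidation arithmetic in that last point is easy to verify; here is the same comparison as a small Python sketch (the port mix is the commenter's example):

        # The convergence comparison from the comment above: two 10 GbE ports carrying
        # both LAN and storage traffic vs. four 1000BaseT ports plus two 4 Gb FC ports.
        converged_gbps = 2 * 10                 # 2 x 10 GbE
        legacy_gbps = 4 * 1 + 2 * 4             # 4 x 1 GbE + 2 x 4 Gb FC
        print(f"Converged:  {converged_gbps} Gb/s over 2 ports")
        print(f"Legacy mix: {legacy_gbps} Gb/s over 6 ports")
        # 20 Gb/s on two cables vs. 12 Gb/s on six, which is the comment's case for
        # convergence as server virtualization drives up per-server I/O.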
  • Beth Pariseau
    What a piece of nostalgia :-) Around 1997, when a team at IBM Research (Haifa and Almaden) started looking at connecting storage to servers using the "regular network" (the ubiquitous LAN), we considered many alternatives (another team even had a look at ATM, still a computer-network candidate at the time). I won't take you through all of our rationale (and we went over some of it again at the end of 1999 with a team from Cisco, before we convened the first IETF BOF in 2000 at Adelaide that resulted in iSCSI and all the rest), but the reasons we chose to drop Fibre Channel over raw Ethernet were multiple:
    • Fibre Channel Protocol (SCSI over the Fibre Channel link) is "mildly" effective because:
      • it implements endpoints in a dedicated engine (offload);
      • it has no transport layer (recovery is done at the application layer, under the assumption that the error rate will be very low);
      • the network is limited in physical span and logical span (number of switches);
      • flow control/congestion control is achieved with a mechanism adequate for a limited-span network (credits); the packet loss rate is almost nil, and that allows FCP to avoid using a transport (end-to-end) layer;
      • FCP switches are simple (addresses are local, and the memory requirements can be limited through the credit mechanism).
    • However, FCP endpoints are inherently costlier than simple NICs (the cost argument: initiators are more expensive).
    • The credit mechanism is highly unstable for large networks (check switch vendors' planning docs for the network-diameter limits) (the scaling argument).
    • The assumption of low losses due to errors might radically change when moving from 1 to 10 Gb/s (the scaling argument).
    • Ethernet has no credit mechanism, and any mechanism with a similar effect increases the endpoint cost. Building a transport layer in the protocol stack has always been the preferred choice of the networking community (the community argument).
    • The "performance penalty" of a complete protocol stack has always been overstated (and overrated). Advances in protocol stack implementation and finer tuning of the congestion control mechanisms make conventional TCP/IP perform well even at 10 Gb/s and over. Moreover, the multicore processors that have become dominant on the computing scene have enough compute cycles available to make any "offloading" possible as a mere code-restructuring exercise (see the stack reports from Intel, IBM, etc.).
    • Building on a complete stack makes available a wealth of operational and management mechanisms built over the years by the networking community (routing, provisioning, security, service location, etc.) (the community argument).
    • Higher-level storage access over an IP network is widely available, and having both block and file served over the same connection with the same support and management structure is compelling (the community argument).
    • Highly efficient networks are easy to build over IP with optimal (shortest-path) routing, while Layer 2 networks use bridging and are limited by the logical tree structure that bridges must follow. The effort to combine routers and bridges (rbridges) promises to change that, but it will take some time to finalize (and we don't know exactly how it will operate). Until then, the scale of Layer 2 networks is going to be seriously limited (the scaling argument).
    As a side argument, a performance comparison made in 1998 showed SCSI over TCP (a predecessor of the later iSCSI) to perform better than FCP at 1 Gb/s for block sizes typical of OLTP (4-8KB).
    That was what convinced us to take the path that led to iSCSI, and we used plain-vanilla x86 servers with plain-vanilla NICs and Linux (with similar measurements conducted on Windows). The networking and storage community acknowledged those arguments and developed iSCSI and the companion protocols for service discovery, boot, etc. The community also acknowledged the need to support existing infrastructure and extend it in a reasonable fashion, and developed two protocols: iFCP (to let hosts with FCP drivers and IP connections connect to storage through a simple conversion from FCP to TCP packets) and FCIP (to extend the reach of FCP through IP by connecting FCP islands over TCP links). Both have been implemented and their foundation is solid. The current attempt at developing a "new-age" FCP over an Ethernet link goes against most of the arguments that gave us iSCSI. It ignores networking layering practice, builds an application protocol directly above a link and thus limits scaling, mandates elements at the link layer and application layer that make implementations more expensive, and leaves aside the whole "ecosystem" that accompanies TCP/IP (and not Ethernet). In a related effort (and at one point while developing iSCSI) we also considered moving away from SCSI (as some non-standardized but popular-in-some-circles software did, e.g., NBP) but decided against it. SCSI is a mature and well-understood access architecture for block storage and is implemented by many device vendors. Moving away from it would not have been justified at the time.
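    To put the "TCP/IP performs well even at 10 Gb/s" argument in concrete terms, here is a rough Python sketch of the I/O rate involved at the OLTP block sizes mentioned above (the 8 KB figure is the upper end of the comment's 4-8 KB range):

        # Roughly how many I/Os per second it takes to saturate a 10 Gb/s link at
        # OLTP-style block sizes, to show the load a software stack must carry.
        link_gbps = 10
        block_bytes = 8 * 1024                  # upper end of the 4-8 KB range above
        iops = (link_gbps * 1e9 / 8) / block_bytes
        print(f"~{iops:,.0f} IOPS to saturate {link_gbps} Gb/s at {block_bytes}-byte blocks")
        # On the order of 150,000 IOPS, counting payload only; spread across the cores
        # of a multicore server, this is the load the comment argues a tuned software
        # TCP/iSCSI stack can handle without dedicated offload hardware.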
  • Beth Pariseau
    As an administrator of storage and network services in a small corporation, I find this proliferation of flavours of "SAN" is becoming a hindrance to our objective of moving into virtualization. I am getting more and more nervous with the proliferation of remote-disk technologies such as iSCSI, FC or FCoE. This, as has been stated, appears to be a marketing ploy, but it also casts doubt on the longevity of any technology we choose. That is, in my view, criminal, as what most businesses turn over in 4 years we normally turn over in 10. Our size and budget still point at iSCSI, but we have to be right the first time to allow storage access to expand as virtualization grows. Forums like this will help, but competing technologies only serve to frustrate the decision-making process.
  • Beth Pariseau
    As someone who recommends SANs to customers on a daily basis, I think that fiber is slowly but surely being edged out of all but the most demanding environments by iSCSI. Fiber has a nasty history, and anyone that's had to go through that nightmare will most likely not want to repeat it if given a suitable alternative. iSCSI's purity of transport and protocols has been a welcome relief to us since 2003: no compatibility issues, high per-port costs, host licensing fees, steep learning curves, etc. And some vendors, like EqualLogic, make a damn fast iSCSI array.
