The Virtualization Room

Jan 10, 2012, 8:32 PM GMT

Whither VXLAN?

Beth Pariseau

I’ve been looking more deeply into the proposed IETF standard VXLAN of late, but my reading has left me with more questions than answers.

VXLAN, or Virtual eXtensible LAN, submitted to the IETF last fall and talked up at last year’s VMworld, is a protocol for tunneling Layer 2 traffic over Layer 3 networks, with the goal of either expanding the available VLAN address space or supporting inter-data-center VM mobility, depending on who you ask. It’s also important to note that VXLAN, for now, is still a proposed standard that remains largely theoretical.

Recently, a blog post by Scott Lowe caught my interest by pointing out some key differences between VXLAN and Cisco’s Overlay Transport Virtualization (OTV) protocol, which also encapsulates Layer 2 frames in Layer 3 packets.

Under VXLAN, this encapsulation is performed at what are called VXLAN Tunnel End Points (VTEPs), which operate at the host level; under OTV, these endpoints lie in Nexus 7000 switches.
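
For illustration, here is a minimal Python sketch of what that encapsulation step amounts to, based on the frame format in the draft: an 8-byte VXLAN header (a flags byte with the I bit set, 24 reserved bits, the 24-bit VNI, and 8 more reserved bits) prepended to the guest’s original Ethernet frame, with the result carried in an ordinary UDP datagram between VTEP IP addresses. The helper names are mine, and the destination port shown is the one IANA eventually assigned, not something the early draft fixes:

    import struct

    VXLAN_UDP_PORT = 4789  # IANA's eventual assignment; the early draft leaves the port to IANA

    def vxlan_header(vni):
        # 8-byte VXLAN header: flags byte 0x08 (I bit set, "VNI is valid"),
        # 24 reserved bits, the 24-bit VNI, then 8 more reserved bits.
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI is a 24-bit field")
        return struct.pack("!II", 0x08000000, vni << 8)

    def encapsulate(inner_frame, vni):
        # A VTEP's job, conceptually: prepend the VXLAN header to the
        # original Layer 2 frame. The outer IP/UDP headers are supplied by
        # the sending host's normal stack when this payload goes out via
        # sock.sendto(..., (remote_vtep_ip, VXLAN_UDP_PORT)).
        return vxlan_header(vni) + inner_frame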

Lowe’s post illustrates some important hurdles VXLAN has to clear before it can perform inter-data-center VM migrations without a “traffic trombone” problem. Namely, vShield Edge, which acts as the default gateway in VXLAN environments, currently cannot be made redundant between sites.

VMware’s Duncan Epping appears in the comments on Lowe’s blog saying VMware is working on addressing this, but for now, in theoretical VXLAN-land, traffic bound for a VM live-migrated to Site B would still have to pass through a vShield Edge device at Site A. An OTV deployment that follows Cisco’s recommended practices does not have this particular problem.

So, let’s assume that by the time VXLAN sees the light of day, the vShield Edge redundancy issue has been resolved, and VXLAN also has some equivalent to OTV’s use of HSRP and identical default gateway addresses at each location, so that traffic no longer has to ‘trombone’ between data centers when workloads are moved. Hasn’t it then just reinvented the OTV wheel?
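
To make the HSRP point concrete, here is a toy Python model (my illustration, not Cisco’s actual mechanism or configuration) of what identical default gateway addresses at each location buy you: both sites answer for the same virtual gateway IP and MAC, so a live-migrated VM’s cached gateway keeps working and its traffic exits locally instead of tromboning back:

    # HSRP group 1's well-known virtual MAC is 0000.0c07.ac01; the virtual
    # IP here is an invented example address.
    VIRTUAL_GW = {"ip": "10.1.100.1", "mac": "00:00:0c:07:ac:01"}

    def arp_reply(site, query_ip):
        # Both sites hand out the identical answer. In a real OTV design,
        # the FHRP hello packets are also filtered from the overlay, so
        # neither site's router defers to the other.
        if query_ip == VIRTUAL_GW["ip"]:
            return {"answered_by": site, "mac": VIRTUAL_GW["mac"]}
        return None

    print(arp_reply("Site A", "10.1.100.1"))  # the VM's view before vMotion
    print(arp_reply("Site B", "10.1.100.1"))  # the same answer afterward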

Some might say that the advantage VXLAN has here over OTV is that VXLAN is on the standards track while OTV is not. But this is also where other experts, like blogger Denton Gentry, point out that VXLAN-style tunneling still comes with tradeoffs, namely some trickiness when it comes to traversing firewalls. As Gentry writes:

“The way VXLAN uses its UDP header can make firewall traversal a bit more challenging. The inner packet headers can hash to a well known UDP port number like 53, making it look like a DNS response, but a firewall attempting to inspect the contents of the frame will not find a valid DNS packet. It would be important to disable any deep packet inspection for packets traveling between VTEP endpoints. If VXLAN is used to extend an L2 network all the way across a WAN the firewall question becomes more interesting.”
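
Gentry’s scenario is easy to see in a sketch. The snippet below is my illustration; the draft suggests deriving the outer UDP source port from a hash of the inner headers so that ECMP spreads flows across paths, but it does not mandate a particular hash, and CRC32 is an arbitrary stand-in here:

    import zlib

    DNS_PORT = 53

    def outer_source_port(inner_frame):
        # Hash the inner Ethernet + IPv4 headers (first 34 bytes) so that
        # distinct inner flows spread across ECMP / port-channel paths.
        return zlib.crc32(inner_frame[:34]) & 0xFFFF

    # Roughly one in 65,536 inner flows hashes to 53, making the outer
    # datagram look like a DNS response that contains no parseable DNS
    # message. Confining the hash to the dynamic port range, e.g.
    # 49152 + (h % 16384), would sidestep that particular collision.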

Some experts have also pointed out in previous discussions about VXLAN that creating tunnels with endpoints at the host level can block network visibility for third-party management and security tools that rely on packet scanning and analysis.

A way around some of these problems might be for VXLAN to be integrated into physical networking equipment, rather than encapsulating and decapsulating packets at host-based endpoints. But is integration into physical networking equipment even a likely scenario for VXLAN? At least right now, the fact that VXLAN doesn’t rely on physical networking equipment appears to be a significant difference between it and other protocols like OTV. Is this intentional? In other words, is VXLAN meant to be a kind of shortcut that bypasses the need for retooled physical networking equipment to create inter-data-center tunnels? If so, to what end? And how would visibility issues then be addressed?

After venturing down these various rabbit holes, you could just revert to characterizing VXLAN as an attempt to enlarge the VLAN address space – since it adds a 24-bit VXLAN Network Identifier (VNI) to each packet header, there can be 2^24, or 16 million, VXLAN segments, a substantial increase over the current VLAN limit of 4096 segments.
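
The arithmetic is simple enough to check:

    # 802.1Q carries a 12-bit VLAN ID; VXLAN's VNI field is 24 bits wide.
    print(2 ** 12)  # 4096 VLANs
    print(2 ** 24)  # 16777216 VXLAN segments, i.e. 4096 times as many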

But to me, this raises the question: why all the talk about VM mobility with VXLAN in the first place?

1 Comment on this Post

 
  • Desmoden
    A couple of points:
    1.) The VXLAN draft was not submitted with the intended status “Proposed Standard”; it was submitted as “Experimental.”
    2.) There is no working group, AD assigned, or charter for VXLAN.
    3.) Both SPB and TRILL offer support for 2^24 VLANs to break the 4096 limit.
    4.) The current 00 draft expires Feb 26th, 2012.
    5.) When considering such things, also consider NVGRE, and read up on what’s happening in L2VPN and L3VPN.
    6.) www.ietf.org is your friend.
    That is not to say that VXLAN won’t get a charter, an AD, and all it needs to grow into a big strong working group with an intended status of Proposed Standard. It might. However, to say it’s a proposed standard when there is no charter and no working group is a wee bit ahead of the cart.
    Oh! And these are NOT pokes at the above post. The above post is quite good. This is just information to expand on what’s above, for those who are curious.
