Posted by: Beth Pariseau
I’ve been looking more deeply into the proposed IETF standard VXLAN of late, but my reading has left me with more questions than answers.
VXLAN, or Virtual eXtensible LAN, submitted to the IETF last fall and talked up at last year’s VMworld, is a protocol for tunneling Layer 2 traffic over Layer 3 networks, with the goal of either expanding the available VLAN address space, or supporting inter-data center VM mobility, depending on who you ask. It’s also important to note that VXLAN, for now, is still a proposed standard that remains largely theoretical.
Recently, a blog post by Scott Lowe caught my interest by pointing out some key differences between VXLAN and Cisco’s Overlay Transport Virtualization (OTV) protocol, which also encapsulates Layer 2 frames in Layer 3 packets.
Under VXLAN, this encapsulation is performed at what are called VXLAN Tunnel End Points (VTEPs), which operate at the host level; under OTV, these endpoints lie in Nexus 7000 switches.
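To make the VTEP’s job concrete, here is a minimal sketch of what the encapsulation step adds, based on the header layout in the VXLAN draft: an 8-byte VXLAN header (a flags byte with the I-bit set, plus a 24-bit VXLAN Network Identifier) prepended to the original Layer 2 frame. The function names are my own, and a real VTEP would also wrap the result in outer Ethernet, IP and UDP headers, which are omitted here.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header described in the IETF draft:
    flags (I-bit set), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

def encapsulate(inner_frame, vni):
    # A VTEP would also prepend outer Ethernet/IP/UDP headers;
    # only the VXLAN header itself is shown in this sketch.
    return vxlan_header(vni) + inner_frame
```

Decapsulation at the receiving VTEP is the mirror image: strip the outer headers, read the VNI to pick the right segment, and deliver the original frame.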
Lowe’s post illustrates some important hurdles VXLAN has to clear before it can perform inter-data-center VM migrations without a “traffic trombone” issue. Namely, vShield Edge, which acts as the default gateway in VXLAN environments, currently cannot be made redundant between sites.
VMware’s Duncan Epping appears in the comments on Lowe’s blog saying VMware is working on addressing this, but for now, this would mean that in theoretical VXLAN-land, traffic meant for a VM live-migrated to Site B would still have to pass through a vShield Edge device at Site A. An OTV deployment that follows Cisco’s recommended practices does not have this particular problem.
So, let’s assume that by the time VXLAN sees the light of day, the vShield Edge redundancy issue has been resolved, and VXLAN also has some equivalent to OTV’s use of HSRP and identical default gateway addresses at each location, so that traffic no longer has to ‘trombone’ between data centers when workloads are moved. Hasn’t it then just reinvented the OTV wheel?
Some might say that the advantage VXLAN has here vs. OTV is that VXLAN is on the standards track while OTV is not. But this is also where other experts, like blogger Denton Gentry, point out that tunneling VXLAN-style still comes with tradeoffs – namely, some trickiness when it comes to traversing firewalls.
The way VXLAN uses its UDP header can make firewall traversal a bit more challenging. The inner packet headers can hash to a well-known UDP port number like 53, making it look like a DNS response, but a firewall attempting to inspect the contents of the frame will not find a valid DNS packet. It would be important to disable any deep packet inspection for packets traveling between VTEP endpoints. If VXLAN is used to extend an L2 network all the way across a WAN, the firewall question becomes more interesting.
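The mechanism behind that collision is worth spelling out: the draft suggests deriving the outer UDP source port from a hash of the inner frame’s headers, so that equal-cost multipath routing spreads flows across links. A sketch, with the hash function chosen purely for illustration (the draft does not mandate one):

```python
import zlib

def vxlan_source_port(inner_frame):
    """Derive the outer UDP source port from a hash of the inner
    frame's headers, as the VXLAN draft suggests for ECMP entropy.
    CRC32 here is an illustrative choice, not mandated by the spec."""
    # Hash the inner Ethernet header (first 14 bytes of the frame).
    h = zlib.crc32(bytes(inner_frame[:14]))
    # Nothing confines the result away from well-known ports, so the
    # hash can land on e.g. 53 and make the packet resemble a DNS reply.
    return h % 65536
```

Because the result can be any value from 0 to 65535, a firewall doing port-based classification can mistake a VXLAN packet for DNS, SNMP, or anything else, which is exactly the trickiness Gentry describes.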
Some experts have also pointed out in previous discussions about VXLAN that creating tunnels with endpoints at the host level can also block visibility into the network for other kinds of third-party management and security tools which use packet scanning and analysis.
A way around some of these problems might be for VXLAN to be integrated into physical networking equipment, rather than encapsulating and decapsulating packets at host-based endpoints. But is integration into physical networking equipment even a likely scenario for VXLAN? At least right now, the fact that VXLAN doesn’t rely on physical networking equipment appears to be a significant difference between it and other protocols like OTV. Is this intentional? In other words, is VXLAN meant to be a kind of shortcut that bypasses the need for retooled physical networking equipment to create inter-data-center tunnels? If so, to what end? And how would visibility issues then be addressed?
After venturing down these various rabbit holes, you could just revert to characterizing VXLAN as an attempt to enlarge the VLAN address space – since it adds a 24-bit VXLAN Network Identifier (VNI) to each packet header, there can be 2^24, or roughly 16.7 million, VXLAN segments, a substantial increase over the current VLAN limit of 4096 segments.
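The arithmetic behind that comparison is simple enough to check directly: the 802.1Q VLAN ID is a 12-bit field, while the VNI is 24 bits, so VXLAN multiplies the available segment count by a factor of 4096.

```python
# 802.1Q VLAN ID is 12 bits; the VXLAN VNI is 24 bits.
vlan_segments = 2 ** 12    # 4096
vxlan_segments = 2 ** 24   # 16,777,216

print(vlan_segments, vxlan_segments, vxlan_segments // vlan_segments)
# → 4096 16777216 4096
```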
But to me, this would raise the question: why all the talk about VM mobility with VXLAN in the first place?