Posted by: Beth Pariseau
After my article the other day about storage pros hoping for a VMware performance boost from pNFS, part of the new NFS 4.1 standard currently being ratified by the IETF, I came across a response from Michael Eisler, a senior technical director and NFS expert at NetApp.
On his blog, Eisler writes:
Certainly all hypervisor vendors should have a pNFS client on their roadmap: it would be a neat way to automatically parallelize the I/O (and metadata) of the file systems of legacy guest operating systems that don’t have pNFS (e.g. Windows 2003 guest operating systems use NTFS, which a hypervisor can virtualize today into LUNs or files on a storage server. With pNFS on the hypervisor, the files, directories, block maps, etc. of NTFS would be automatically distributed and striped).
However, NIC bonding is a solution to problems that don’t exactly intersect the problems pNFS solves. Going down a pNFS-only route in lieu of NIC bonding would lead to cases where a single Gigabit Ethernet link between the hypervisor’s pNFS client and a storage device is still not enough.
By the way, NFSv4.1, which pNFS is a part of, adds the capability to perform trunking at the NFS level. NFSv4.1 adds a session layer. A client establishes a session with an NFSv4.1 server. The client can create multiple TCP connections to the NFSv4.1 server, each potentially going over a different network interface on the client and arriving on a different interface on the NFSv4.1 server. Now different requests sent over the same session identifier can go over different network paths. I suspect NFSv4.1 trunking has the potential to “steal the show” with respect to the current spotlight on pNFS within the NFSv4.1 protocol. It will work with or without pNFS.
At any rate, NFSv4.1 trunking would be a way to obviate NIC bonding. Perhaps that is what Ms. Pariseau was alluding to.
Er…not exactly, but I appreciate the clarification.
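For readers trying to picture what Eisler means by trunking at the NFS level, here is a rough, hypothetical sketch (not a real NFS client, and the names are invented for illustration): one session identifier spans several transport connections, and because each request carries the session ID rather than being tied to a particular connection, the client can spray requests across multiple network paths.

```python
# Hypothetical sketch of NFSv4.1-style session trunking -- NOT a real NFS
# implementation. One session ID is shared across several TCP connections
# (e.g. one per NIC); the server correlates requests by session ID, so it
# doesn't matter which connection (network path) a request arrives on.
import itertools


class TrunkedSession:
    def __init__(self, session_id, connections):
        self.session_id = session_id      # shared across all connections
        self.connections = connections    # stand-ins for per-NIC TCP connections
        self._rr = itertools.cycle(range(len(connections)))

    def send(self, request):
        # Round-robin each request over the available connections; the
        # session ID, not the connection, identifies the session.
        conn = next(self._rr)
        self.connections[conn].append((self.session_id, request))
        return conn


# Two simulated connections, e.g. two NICs on the client.
session = TrunkedSession("sess-42", connections=[[], []])
paths = [session.send(op) for op in ["READ", "WRITE", "GETATTR", "READ"]]
# Requests alternate across both connections, all tagged with the same session.
```

The point of the sketch is the contrast with NIC bonding: bonding aggregates links below TCP, invisibly to NFS, while session trunking lets the NFS layer itself spread one session's requests over multiple independent paths.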