Monday, July 19, 2010

Patching vmxnet to disable LRO

We've been playing with two CentOS 5.5 virtual machines connected over a virtual switch on VMware ESX 4.0 Update 2. Unfortunately we experienced very poor TCP performance. I think the same issue affects ESX 4.1, as we found this reference in the release notes:
Poor TCP performance can occur in traffic-forwarding virtual machines with LRO enabled
Some Linux modules cannot handle LRO-generated packets.
As a result, having LRO enabled on a VMXNET 2 or VMXNET 3 device in a
traffic-forwarding virtual machine running a Linux guest operating system
can cause poor TCP performance. LRO is enabled by default on these devices.

Workaround: In traffic-forwarding virtual machines running Linux guests,
set the module load time parameter for the VMXNET 2 or VMXNET 3 Linux driver
to include disable_lro=1.

We found that this works:
# rmmod vmxnet
# modprobe vmxnet disable_lro=1

BUT the problem is how to make vmxnet default to disabling LRO when the VM is first booted.

We tried editing '/etc/modprobe.conf' and adding 'options vmxnet disable_lro=1', but this was not sufficient. It seems that the vmxnet module is first loaded during bootup from the initrd.
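We did not test it, but one possible fix along those lines would be to rebuild the initrd after adding the options line, so that the module options baked into the initrd get updated too. A rough, untested sketch (the initrd file name is just the usual CentOS 5 default):

# echo 'options vmxnet disable_lro=1' >> /etc/modprobe.conf
# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
# reboot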

Our conclusion was that patching the source code of vmxnet was the best way. So here is what we did:
# cd /usr/lib/vmware-tools/modules/source/
# tar xvf vmxnet.tar
# cd vmxnet-only
# grep -n disable_lro *
vmxnet.c:155:static int disable_lro = 0;
vmxnet.c:157: module_param(disable_lro, int, 0);
vmxnet.c:159: MODULE_PARM(disable_lro, "i");
vmxnet.c:931: !disable_lro) {
# cp vmxnet.c vmxnet.c.orig
# chmod +w vmxnet.c
# vi vmxnet.c
# diff -u vmxnet.c.orig vmxnet.c
--- vmxnet.c.orig 2010-07-19 13:20:44.000000000 +0100
+++ vmxnet.c 2010-07-19 13:57:43.000000000 +0100
@@ -152,7 +152,7 @@
#endif // VMXNET_DO_ZERO_COPY

#ifdef VMXNET_DO_TSO
-static int disable_lro = 0;
+static int disable_lro = 1;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 9)
module_param(disable_lro, int, 0);
#else
@@ -932,6 +932,14 @@
lp->lpd = TRUE;
printk(" lpd");
}
+
+ if (disable_lro) {
+ printk(" disable_lro:1");
+ }
+ else {
+ printk(" disable_lro:0");
+ }
+
#endif
#endif

# cd ..
# mv vmxnet.tar vmxnet.tar.orig
# tar cvf vmxnet.tar vmxnet-only/
# vmware-config-tools.pl -c

The only change to the source code was to flip the default value of the disable_lro variable from zero to one. We also added a few lines to report the value of the variable in the kernel log when the module loads.
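Once the rebuilt module is in place and the VM has been rebooted, you can confirm that the new default took effect. A quick check, assuming the extra printk above and the standard module tools:

# dmesg | grep disable_lro
# modinfo vmxnet | grep disable_lro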

To recompile the modules, you will need these rpm packages installed:
gcc, binutils, kernel-devel, kernel-headers
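If any of these are missing, something like the following should pull them in from the standard CentOS repositories (note that kernel-devel needs to match the running kernel):

# yum install gcc binutils kernel-devel kernel-headers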

For reference, the version of vmware tools we were using was:
VMwareTools-8195-261974

Of course, if you update vmware-tools, you may need to review this fix, and re-patch the file.

If you are using the vmxnet3 driver, the fix should be similar to the above.
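We have not tried it, but assuming the vmxnet3 source shipped with the tools (vmxnet3.tar in the same directory) exposes the same disable_lro parameter, as the quoted release note suggests, the starting point would be something like:

# cd /usr/lib/vmware-tools/modules/source/
# tar xvf vmxnet3.tar
# grep -n disable_lro vmxnet3-only/*.c

then make the equivalent one-line change to the default, re-tar the directory and run vmware-config-tools.pl -c as before.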

I'd like to thank Michael Melling for helping test the above patch.
Regards
Nigel Smith

5 comments:

crashdump.fr said...

thanks a lot !

Unknown said...

First, thanks! You saved me many headaches. In troubleshooting this further, we found this:

Under the host's Configuration tab, under Software -> Advanced Settings, it appears you can disable LRO at the host level. Navigate to the Net section, and from there find the various LRO settings.

Currently, we have disabled Net.Vmxnet2SwLRO and Net.Vmxnet3SwLRO... and this appears to have corrected the issue for us.

We have had to reboot the host for the above options to take effect. We had to do this at a host level because some of our VMs cannot be patched by us (appliance model stuff).
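For reference, the same settings can presumably also be changed from the ESX service console (we have not tried this ourselves) with something like:

# esxcfg-advcfg -s 0 /Net/Vmxnet2SwLRO
# esxcfg-advcfg -s 0 /Net/Vmxnet3SwLRO
# esxcfg-advcfg -g /Net/Vmxnet3SwLRO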

Unknown said...

Is this necessary if you call mkinitrd to get the updated /etc/modprobe.conf into the initrd?

Kosztyu AndrĂ¡s said...

Saved my life, thank you :)
In the end I just put the rmmod/modprobe commands into rc.local.

Tim Mothery said...

Glad this blog entry was still here. Was pulling my hair out trying to figure out why gigabit throughput was dismal ( < 100Kb/s).

Tried both fixes: a) replacing the vmxnet3 NICs with e1000, and b) changing the LRO settings on the ESXi host. Both worked equally well to fix the problem. Now have > 40MB/s routing over scp as expected.

Setup is an ESXi 4.1 host with a CentOS 6.2 guest (routing box) connecting 4 virtual switches / subnets.