Troubleshooting LLDP

Why

LLDP is a wonderful protocol which paints a picture of datacenter topology. lldpd is a daemon that runs on your servers, receives LLDP frames, and reports your network location and more. There’s also a recently patched lldp Ansible module.

Like all tools, LLDP/lldpd has had some issues. Here are the ones I’ve seen in practice, with diagnosis and resolution:

Switch isn’t configured to send LLDP frames

Diagnosing:

tcpdump -i eth0 -s 1500 -XX -c 1 'ether proto 0x88cc'

Switches send LLDP frames every 30 seconds by default, but only once the switchport’s configuration has LLDP enabled.
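
A quick sanity check of the advertisement interval, assuming the interface is eth0: capture two consecutive frames and compare the timestamps.

tcpdump -i eth0 -c 2 -e -tttt 'ether proto 0x88cc' # frames should be ~30s apart on a default config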

Host isn’t reporting LLDP frames

Generally, this means lldpd isn’t running on the server: the LLDP frames are arriving (per the tcpdump above), but lldpctl returns nothing.

Diagnosing:

lldpctl # returns nothing
pgrep -f lldpd # returns nothing
service lldpd restart

Be sure that the lldpd service is set to run at boot and take a look at configuration options.
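
For example, on a SysV-style host (a sketch; command names vary by distro and init system):

chkconfig lldpd on  # use 'systemctl enable lldpd' on systemd hosts
service lldpd start
lldpctl             # should list neighbors once the next frame arrives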

NIC is dropping LLDP frame

By far the most frustrating: NIC firmware issues can cause the NIC to drop LLDP frames. (Page 10, item 14)

The way this one manifests:

  • lldpctl reports nothing
  • lldpd is running
  • switch is configured to send LLDP frames

Diagnosing:

Run a packet capture on the switch to ensure that the LLDP frames are being sent to the port. If you’re able to see the frame go out on the wire and traffic is otherwise functioning normally to the host, the problem lies with the NIC.

The fix here was to apply the NIC firmware upgrade; after that, LLDP was good to go!
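
To confirm which firmware the NIC is actually running before and after the upgrade, ethtool reports the driver and firmware versions (eth0 is just an example):

ethtool -i eth0   # check the firmware-version line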


nsxchecker: Verify the health of your NSX network


Recently I got to work with the NSX API and write a tool to do a quick health check of NSX networks.

nsxchecker is a valuable operational tool for quickly reporting an NSX network’s health. One of the promises of SDN is automated tooling for operational teams, and with the NSX API I was quickly able to deliver.


nsxchecker accepts an NSX lswitch UUID or a neutron_net_id. Rackspace’s Neutron plugin, quark, tags created lports with a neutron_net_id. nsxchecker requires administrative access to the NSX controllers.

Neutron itself supports probes, but they have a couple of drawbacks:

  1. It doesn’t work with all implementations
  2. For a large network, it’s slow

There are more details in the README on GitHub.

Upgrading Open vSwitch

Operating Open vSwitch brings a new set of challenges.

One of those challenges is managing Open vSwitch itself and making sure you’re up to date with performance and stability fixes. For example, in late 2013 there were significant performance improvements with the release of 1.11 (flow wildcarding!) and in the 2.x series there are even more improvements coming.

This means everyone running those old versions of OVS (I’m looking at you, <=1.6) should upgrade and get these huge performance gains.

There are a few things to be aware of when upgrading OVS:

  1. Reloading the kernel module is a data plane impacting event. It’s minimal. Most won’t notice, and the ones that do only see a quick blip. The duration of the interruption is a function of the number of ports and number of flows before the upgrade.
  2. Along those lines, if you orchestrate OVS kernel module reloads with parallel-ssh, Ansible, or really any other tool, be mindful of the connection timeouts. All traffic on the host will be momentarily dropped, including your SSH connection! Set your SSH timeouts appropriately or bad things happen! (See the sketch after this list.)
  3. Pay very close attention to kernel upgrades and OVS kernel module upgrades. Failure to do so could mean your host networking does not survive a reboot!
  4. Any changes you’ve made outside of OVS/OVSDB to objects OVS manages, e.g., manual setup of tc buckets, will be destroyed.
  5. If you use XenServer, by upgrading OVS beyond what’s delivered from Citrix directly, you’re likely unsupported.
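
For example, with parallel-ssh you can raise the per-host timeout so the momentary traffic drop doesn’t fail the run (host file, parallelism, and timeout are illustrative):

pssh -h hypervisors.txt -p 10 -t 120 -i '/etc/init.d/openvswitch force-kmod-reload'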

Here is a rough outline of the OVS upgrade process for an individual hypervisor:

  • Obtain Open vSwitch packages
  • Install Open vSwitch userspace components, kernel module(s) (see #3 and “Where things can really go awry”)
  • Load new Open vSwitch kernel module (/etc/init.d/openvswitch force-kmod-reload)
  • Simplified Ansible Playbook: https://gist.github.com/andyhky/9983421
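
On a XenServer host, the install and reload steps boil down to something like this (package names and versions are placeholders, not exact):

rpm -Uvh openvswitch-<version>.rpm openvswitch-modules-xen-<kernel>-<version>.rpm
/etc/init.d/openvswitch force-kmod-reload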

The INSTALL file provides more detailed upgrade instructions. In the old days, upgrading Open vSwitch meant you had to either reboot your host or rebuild all of your flows because of the kernel module reload. Since the introduction of the force-kmod-reload support, the upgrade process is more durable and less impactful.

Where things can really go awry

If your OS has a new kernel pending, e.g., after a XenServer service pack, you will want to install the packages for both your running kernel module and the one which will be running after reboot. Failing to do so can result in losing connectivity to your machine.


It is not a guaranteed loss of networking when the Open vSwitch kernel module doesn’t match the running Xen kernel, but it is a best practice to ensure they are in lock-step. The cases I’ve seen go wrong usually involve significant version changes, e.g., 1.6 -> 1.11.

You can check if you’re likely to have a problem by running this code (XenServer only, apologies for quick & dirty bash):

#!/usr/bin/env bash
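# Check that an openvswitch-modules RPM matching the OVS build is installed for
# both the currently running Xen kernel and the kernel that will boot next.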
RUNNING_XEN_KERNEL=`uname -r | sed s/xen//`
PENDING_XEN_KERNEL=`readlink /boot/vmlinuz-2.6-xen  | sed s/xen// | sed s/vmlinuz-//`
OVS_BUILD=`/etc/init.d/openvswitch version | grep ovs-vswitchd | awk '{print $NF}'`
rpm -q openvswitch-modules-xen-$RUNNING_XEN_KERNEL-$OVS_BUILD > /dev/null
if [[ $? == 0 ]]
then
    echo "Current kernel and OVS modules match"
else
    CURRENT_MISMATCH=1
    echo "Current kernel and OVS modules do not match"
fi

rpm -q openvswitch-modules-xen-$PENDING_XEN_KERNEL-$OVS_BUILD > /dev/null
if [[ $? == 0 ]]
then
    echo "Pending kernel and OVS modules match"
else
    PENDING_MISMATCH=1
    echo "Pending kernel and OVS will not match after reboot. This can cause system instability."
fi

if [[ $CURRENT_MISMATCH == 1 || $PENDING_MISMATCH == 1 ]]
then
    exit 1
fi

Luckily, this can be rolled back. Access the host via DRAC/iLO and roll back the vmlinuz-2.6-xen symlink in /boot to one that matches your installed openvswitch-modules RPM. I made a quick and dirty bash script which can roll back, but it won’t be too useful unless you put the script on the server beforehand. Here it is (again, XenServer only):

#!/usr/bin/env bash
# Not guaranteed to work. YMMV and all that.
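# Find a Xen kernel in /boot that has a matching openvswitch-modules RPM
# installed, then point the vmlinuz-2.6-xen symlink back at it.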
OVS_KERNEL_MODULES=`rpm -qa 'openvswitch-modules-xen*' | sed s/openvswitch-modules-xen-// | cut -d "-" -f1,2;`
XEN_KERNELS=`find /boot -name "vmlinuz*xen" \! -type l -exec ls -ld {} + | awk '{print $NF}'  | cut -d "-" -f2,3 | sed s/xen//`
COMMON_KERNEL_VERSION=`echo $XEN_KERNELS $OVS_KERNEL_MODULES | tr " " "\n"  | sort | uniq -d`
stat /boot/vmlinuz-${COMMON_KERNEL_VERSION}xen > /dev/null
if [[ $? == 0 ]]
then
    rm /boot/vmlinuz-2.6-xen
    ln -s /boot/vmlinuz-${COMMON_KERNEL_VERSION}xen /boot/vmlinuz-2.6-xen
else
    echo "Unable to find kernel version to roll back to! :(:(:(:("
fi

StatsD and multiple metrics


Measure all the things! Graphite & StatsD are my weapons of choice. One set of metrics in particular that we wanted to measure is the various TCP stats, including the TCP retransmit rate. We crafted a Python script to send all of the metrics in a single UDP packet and hit a weird scenario.

The Python script was all ready to roll, except that StatsD was only logging one metric. The metric packets were arriving at the StatsD instance, but only one metric was being processed.

It turns out multi-metric packets weren’t always supported by StatsD: support was added in 0.4.0 and exists in later versions. Upgrading StatsD fixes this problem.
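
Here’s a quick way to reproduce the test from a shell, assuming a StatsD instance on localhost:8125 and made-up metric names; multiple metrics go in one datagram, separated by newlines (bash’s /dev/udp pseudo-device does the sending):

printf 'tcp.retransmits:3|c\ntcp.insegs:1042|c\ntcp.outsegs:998|c' > /dev/udp/localhost/8125  # one datagram, three metrics; bash only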

The Host Network Stack

This post is a collection of useful articles/videos that I’ve collected about networking on XenServer and Linux.

XenServer

Linux

As you can see, there is a multitude of elements to consider when looking into host networking issues for a Linux VM running on XenServer (which is Linux underneath the covers anyway).

Network wiring with XenServer and Open vSwitch

In the physical world when you power on a server it’s already cabled (hopefully).

With VMs things are a bit different. Here’s the sequence of events when a VM is started in Nova and what happens on XenServer to wire it up with Open vSwitch.


  1. nova-compute starts the VM via XenAPI
  2. XenAPI VM.start creates a domain and creates the VM’s vifs on the hypervisor
  3. The Linux userspace device manager (udev) receives this event, and scripts within /etc/udev/rules.d are fired in lexical order
  4. Xen’s vif plug script is fired, which at a minimum creates a port on the relevant virtual switch
    • Newer versions (XS 6.1+) of this plug script also invoke a setup-vif-rules script, which creates several entries in the OpenFlow table (grabbed from the code comments):
      • Allow DHCP traffic (outgoing UDP on port 67)
      • Filter ARP requests
      • Filter ARP responses
      • Allow traffic from specified ipv4 addresses
      • Neighbour solicitation
      • Neighbour advertisement
      • Allow traffic from specified ipv6 addresses
      • Drop all other neighbour discovery
      • Drop other specific ICMPv6 types
      • Router advertisement
      • Redirect gateway
      • Mobile prefix solicitation
      • Mobile prefix advertisement
      • Multicast router advertisement
      • Multicast router solicitation
      • Multicast router termination
      • Drop everything else
  5. Creation of the port on the virtual switch also adds entries into OVSDB, the database which backs Open vSwitch.
  6. ovs-xapi-sync, which starts on XenAPI/Open vSwitch startup has a local copy of the system’s state in memory. It checks for changes in Bridge/Interface tables, and pulls in XenServer specific data to other columns in those tables.
  7. On many events within OVSDB, including create/update of tables touched in these OVSDB operations, the OVS controller is notified via JSON RPC. Thanks Scott Lowe for clarification on this part.

After all of that happens, the VM boots and the guest OS sets up its network stack.
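
If you want to see the end state of this wiring on a hypervisor, a few read-only commands cover it (bridge and vif names are examples from this environment):

ovs-vsctl list-ports xenbr0        # ports created by the vif plug script
ovs-ofctl dump-flows xapi0         # OpenFlow entries, including those from setup-vif-rules
ovs-vsctl list Interface vif1.0    # OVSDB row, including XenServer-specific columns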

Deep Dive: HTB Rate Limiting (QoS) with Open vSwitch and XenServer

DISCLAIMER: I’m still getting my feet wet with Open vSwitch. This post is just a cleaned up version of my scratchpad.

Open vSwitch has a few ways of providing rate limiting. This deep dive will go into the internals of reverse engineering an existing virtual interface’s egress rate limits applied with tc-htb. Hierarchy Token Bucket (HTB) is a standard Linux packet scheduling implementation. More reading on HTB can be done on the author’s site; I found the implementation and theory pretty interesting.

This is current as of Open vSwitch 1.9.

The information needed to retrieve HTB rate limits mostly lives in OVSDB:

Open vSwitch Schema (http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf)

Things can get complex depending on how your vifs plug into your physical interfaces. In my case, OpenStack Quantum requires an integration bridge which I’ve attempted to diagram:

Open vSwitch queues (diagram)

  1. On instance boot, vifs are plugged into xapi0. xapi0’s controller nodes pull down information including flows and logical queues.
  2. The flows pulled down in (1) set the destination queue on all traffic from the interface’s source IP address.
  3. Traffic sent to that queue lands in a linux-htb ring, where the packets are scheduled.

Let’s take a look at an example. I want to retrieve the rate limit according to the hypervisor for vif2.1 which connects to xapi0, xenbr1, and the physical interface eth1. The IP address is 10.0.0.37.

Steps:

  • Find the QoS used by the physical interface:
    # ovs-vsctl find Port name=eth1 | grep qos
    qos : 678567ed-9f71-432b-99a2-2f28efced79c

  • Determine which queue is being used for your virtual interface. The value after set_queue is our queue_id.
    # ovs-ofctl dump-flows xapi0 | grep 10.0.0.37 | grep "set_queue"
    ... ,nw_src=10.0.0.37 actions=set_queue:13947, ...
  • List the QoS from the first step and its type. NOTE: This command outputs every single OpenFlow queue_id/OVS Queue UUID for the physical interface. The queue_id from the previous step will be the key we’re interested in and the value is our Queue’s UUID
    # ovs-vsctl list Qos 678567ed-9f71-432b-99a2-2f28efced79c | egrep 'queues|type'
    queues : { ... 13947=787b609b-417c-459f-b9df-9fb5b362e815,... }
    type : linux-htb
  • Use the Queue UUID from the previous step to list the Queue:
    # ovs-vsctl list Queue 787b609b-417c-459f-b9df-9fb5b362e815 | grep other_config
    other_config : {... max-rate="614400000" ...}
  • In order to tie it back to tc-htb we have to convert the OpenFlow queue_id+1 to hexadecimal (367c). I think it’s happening here in the OVS code, but I’d love to have a definitive answer.
    # tc -s -d class show dev eth1 | grep 367c | grep ceil # Queue ID + 1 in Hex
    class htb 1:367c ... ceil 614400Kbit
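
The hex conversion itself is easy to sanity-check from the shell:

printf '%x\n' $((13947 + 1))    # prints 367c, matching the tc class above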

How We Found Our Virtual Networking Mojo

Switch and Network Adapter Fault Tolerance

Each of the VMware ESX hosts that we had was equipped with dual network adapters (NICs). With a typical physical server, two NICs could provide fault tolerance. However, for ESX hosts a dual-NIC setup is not fault tolerant. VMware ESX has three major types of traffic:

  1. VMkernel – used for vMotion, which allows host downtime without an interruption of service
  2. Service Console – initiates vMotion, serves as the primary venue for managing Virtual Machines
  3. VM Traffic – each individual virtual machine’s traffic, e.g., a web server’s incoming /outgoing requests

Our previous setup had significant points of failure:

  • If one NIC failed there would be complete service interruption.
    • If VMNIC0 failed, the interruption would be slightly more controllable, as it could be scheduled, even though the virtual machines would not be manageable.
    • If VMNIC1 failed, all virtual machine traffic would be unexpectedly interrupted.
  • If the physical switch failed there would be an unscheduled complete service interruption.

Our old VMware Networking setup, no NIC/switch redundancy

 

The plan was to purchase an additional quad-port NIC to remedy the NIC redundancy issue. With the additional 4 ports, each of the three types of traffic could have two dedicated ports. This setup is fault tolerant at the NIC level for each of the three traffic types. It takes the Cisco ESX Hosts with 4 NICs (page 62) to the next level. To fix the switch redundancy problem, the NICs are evenly split across two switches, per traffic type.

Networking setup demonstrating switch/NIC redundancy

 

Networking Performance

With the previous setup, we had an average of 7.5 VMs going through one NIC per host. The NICs were being overused, and users were complaining of slow network performance. We implemented NIC teaming, which also afforded failover, with favorable results.


Cacti Diagram showing before and after on an ESX host

In the graph above, a drastic decrease in utilization is shown after Week 32. This demonstrates the decrease in stress on the NIC that was being overutilized.