OpsGuide/Network Troubleshooting

Network troubleshooting can be challenging because a network issue may surface at any point in the cloud. Using a logical troubleshooting procedure helps you isolate the problem and mitigate it. This chapter aims to give you the information you need to identify issues in either nova-network or OpenStack Networking (neutron) with Linux Bridge or Open vSwitch.

Using ip a to Check Interface States
On compute nodes and nodes running nova-network, use the following command to see information about interfaces, including information about IPs, VLANs, and whether your interfaces are up:
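    $ ip a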

If you are encountering any sort of networking difficulty, one good initial troubleshooting step is to make sure that your interfaces are up. For example:
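    $ ip a | grep state
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    3: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

Device names and the exact set of interfaces will vary from host to host.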

You can safely ignore the state of virbr0, which is a default bridge created by libvirt and not used by OpenStack.

Visualizing nova-network Traffic in the Cloud
If you are logged in to an instance and ping an external host, for example, Google, the ping packet takes the route shown in the figure below.

Figure. Traffic route for ping packet

1. The instance generates a packet and places it on the virtual Network Interface Card (NIC) inside the instance, such as eth0.

2. The packet transfers to the virtual NIC of the compute host, such as vnet1. You can find out which vnet NIC is being used by looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml file.

3. From the vnet NIC, the packet transfers to a bridge on the compute node, such as br100. If you run FlatDHCPManager, one bridge is on the compute node. If you run VlanManager, one bridge exists for each VLAN. To see which bridge the packet will use, run the command brctl show and look for the vnet NIC (see the example after this list). You can also reference nova.conf and look for the flat_interface_bridge option.

4. The packet transfers to the main NIC of the compute node. You can also see this NIC in the brctl output, or you can find it by referencing the flat_interface option in nova.conf.

5. After the packet is on this NIC, it transfers to the compute node's default gateway. The packet is now most likely out of your control at this point. The diagram depicts an external gateway. However, in the default configuration with multi-host, the compute host is the gateway.

Reverse the direction to see the path of a ping reply. From this path, you can see that a single packet travels across four different NICs. If a problem occurs with any of these NICs, a network issue occurs.
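For example, brctl output on a FlatDHCPManager compute node might look like this (bridge and interface names are illustrative):

    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    br100           8000.7e4e1282b6f5       no              eth0
                                                            vnet1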

Visualizing OpenStack Networking Service Traffic in the Cloud
OpenStack Networking has many more degrees of freedom than nova-network does because of its pluggable back end. It can be configured with open source or vendor proprietary plug-ins that control software-defined networking (SDN) hardware, or plug-ins that use Linux native facilities on your hosts, such as Open vSwitch or Linux Bridge.

The networking chapter of the OpenStack Administrator Guide shows a variety of networking scenarios and their connection paths. The purpose of this section is to give you the tools to troubleshoot the various components involved however they are plumbed together in your environment.

For this example, we will use the Open vSwitch (OVS) back end. Other back-end plug-ins will have very different flow paths. OVS is the most popularly deployed network driver, according to the April 2016 OpenStack User Survey. We'll describe each step in turn, with the figure Neutron network paths for reference.

1. The instance generates a packet and places it on the virtual NIC inside the instance, such as eth0.

2. The packet transfers to a Test Access Point (TAP) device on the compute host, such as tap690466bc-92. You can find out which TAP device is being used by looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml file. The TAP device name is constructed using the first 11 characters of the port ID (10 hex digits plus an included '-'), so another means of finding the device name is to use the neutron command. This returns a pipe-delimited list, the first item of which is the port ID. For example, to get the port ID associated with IP address 10.0.0.10, list the ports and filter on that address (see the first sketch after this list). Taking the first 11 characters of the port ID, we can construct a device name of tapff387e54-9e from this output.

Figure. Neutron network paths

3. The TAP device is connected to the integration bridge, br-int. This bridge connects all the instance TAP devices and any other bridges on the system. In this example, we have int-br-eth1 and patch-tun. int-br-eth1 is one half of a veth pair connecting to the bridge br-eth1, which handles VLAN networks trunked over the physical Ethernet device eth1. patch-tun is an Open vSwitch internal port that connects to the br-tun bridge for GRE networks.

   The TAP devices and veth devices are normal Linux network devices and may be inspected with the usual tools, such as ip and tcpdump. Open vSwitch internal devices, such as patch-tun, are only visible within the Open vSwitch environment. If you try to run tcpdump -i patch-tun, it raises an error, saying that the device does not exist.

   It is possible to watch packets on internal interfaces, but it does take a little bit of networking gymnastics. First you need to create a dummy network device that normal Linux tools can see. Then you need to add it to the bridge containing the internal interface you want to snoop on. Finally, you need to tell Open vSwitch to mirror all traffic to or from the internal port onto this dummy port. After all this, you can then run tcpdump on the dummy interface and see the traffic on the internal port.

   To capture packets from the patch-tun internal interface on integration bridge br-int (a command sketch follows this list):

   1. Create and bring up a dummy interface, snooper0.
   2. Add device snooper0 to bridge br-int.
   3. Create a mirror of patch-tun to snooper0 (this returns the UUID of the mirror port).
   4. Profit. You can now see traffic on patch-tun by running tcpdump -i snooper0.
   5. Clean up by clearing all mirrors on br-int and deleting the dummy interface.
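For example, to find the port ID (and hence the TAP device name) for IP address 10.0.0.10 (the IDs shown are illustrative):

    # neutron port-list | grep 10.0.0.10
    | ff387e54-9e54-442b-94a3-aa4481764f1d | ... | {"subnet_id": "...", "ip_address": "10.0.0.10"} |

And a sketch of the mirror-port procedure for snooping on patch-tun; the dummy device name snooper0 is arbitrary:

    # ip link add name snooper0 type dummy
    # ip link set dev snooper0 up
    # ovs-vsctl add-port br-int snooper0
    # ovs-vsctl -- set Bridge br-int mirrors=@m \
        -- --id=@snooper0 get Port snooper0 \
        -- --id=@patch-tun get Port patch-tun \
        -- --id=@m create Mirror name=mymirror \
        select-dst-port=@patch-tun select-src-port=@patch-tun \
        output-port=@snooper0
    # tcpdump -i snooper0
    # ovs-vsctl clear Bridge br-int mirrors
    # ovs-vsctl del-port br-int snooper0
    # ip link delete dev snooper0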

4. On the integration bridge, networks are distinguished using internal VLANs regardless of how the networking service defines them. This allows instances on the same host to communicate directly without transiting the rest of the virtual, or physical, network. These internal VLAN IDs are based on the order in which they are created on the node and may vary between nodes. These IDs are in no way related to the segmentation IDs used in the network definition and on the physical wire.

   VLAN tags are translated between the external tag defined in the network settings and internal tags in several places. On br-int, incoming packets from int-br-eth1 are translated from external tags to internal tags. Other translations also happen on the other bridges and will be discussed in those sections.

   To discover which internal VLAN tag is in use for a given external VLAN, use the ovs-ofctl command (a sketch follows this list):

   1. Find the external VLAN tag of the network you're interested in. This is the provider:segmentation_id as returned by the networking service.
   2. Grep for the provider:segmentation_id, 2113 in this case, in the output of ovs-ofctl dump-flows br-int. Here you can see that packets received on port ID 1 with the VLAN tag 2113 are modified to have the internal VLAN tag 7. Digging a little deeper, you can confirm that port 1 is in fact int-br-eth1.

5. The next step depends on whether the virtual network is configured to use 802.1q VLAN tags or GRE:

   1. VLAN-based networks exit the integration bridge via the veth interface int-br-eth1 and arrive on the bridge br-eth1 on the other member of the veth pair, phy-br-eth1. Packets on this interface arrive with internal VLAN tags and are translated to external tags in the reverse of the process described above. Packets, now tagged with the external VLAN tag, then exit onto the physical network via eth1. The Layer 2 switch this interface is connected to must be configured to accept traffic with the VLAN ID used. The next hop for this packet must also be on the same layer-2 network.

   2. GRE-based networks are passed by patch-tun to the tunnel bridge br-tun on interface patch-int. This bridge also contains one port for each GRE tunnel peer, so one for each compute node and network node in your network. The ports are named sequentially from gre-1 onward. Matching gre-<n> interfaces to tunnel endpoints is possible by looking at the Open vSwitch state with ovs-vsctl show. In this example, gre-1 is a tunnel from IP 10.10.128.21, which should match a local interface on this node, to IP 10.10.128.16 on the remote side. These tunnels use the regular routing tables on the host to route the resulting GRE packet, so there is no requirement that GRE endpoints are all on the same layer-2 network, unlike VLAN encapsulation.

      All interfaces on br-tun are internal to Open vSwitch. To monitor traffic on them, you need to set up a mirror port as described above for patch-tun in the br-int bridge. All translation of GRE tunnels to and from internal VLANs happens on this bridge.
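A sketch of this lookup (the network name, tags, and port numbers are illustrative):

    $ neutron net-show <network-name> | grep provider:segmentation_id
    | provider:segmentation_id  | 2113 |

    # ovs-ofctl dump-flows br-int | grep vlan=2113
    ... priority=3,in_port=1,dl_vlan=2113 actions=mod_vlan_vid:7,NORMAL

    # ovs-ofctl show br-int
    ...
     1(int-br-eth1): addr:c2:72:74:7f:86:08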

To discover which internal VLAN tag is in use for a GRE tunnel, use the ovs-ofctl command (a sketch appears at the end of this section):

1. Find the provider:segmentation_id of the network you're interested in. This is the same field used for the VLAN ID in VLAN-based networks.

2. Grep for 0x<provider:segmentation_id>, 0x3 in this case, in the output of ovs-ofctl dump-flows br-tun. Here, you see three flows related to this GRE tunnel. The first is the translation from inbound packets with this tunnel ID to internal VLAN ID 1. The second shows a unicast flow to output port 53 for packets destined for MAC address fa:16:3e:a6:48:24. The third shows the translation from the internal VLAN representation to the GRE tunnel ID flooded to all output ports. For further details of the flow descriptions, see the man page for ovs-ofctl. As in the previous VLAN example, numeric port IDs can be matched with their named representations by examining the output of ovs-ofctl show br-tun.

6. The packet is then received on the network node. Note that any traffic to the l3-agent or dhcp-agent will be visible only within their network namespace. Watching any interfaces outside those namespaces, even those that carry the network traffic, shows only broadcast packets like Address Resolution Protocol (ARP) requests; unicast traffic to the router or DHCP address will not be seen. See Dealing with Network Namespaces for details on how to run commands within these namespaces. Alternatively, it is possible to configure VLAN-based networks to use external routers rather than the l3-agent shown here, so long as the external router is on the same VLAN:

   1. VLAN-based networks are received as tagged packets on a physical network interface, eth1 in this example. Just as on the compute node, this interface is a member of the br-eth1 bridge.

   2. GRE-based networks are passed to the tunnel bridge br-tun, which behaves just like the GRE interfaces on the compute node.

7. Next, the packets from either input go through the integration bridge, again just as on the compute node.

8. The packet then makes it to the l3-agent. This is actually another TAP device within the router's network namespace. Router namespaces are named in the form qrouter-<router-uuid>. Running ip a within the namespace shows the TAP device name, qr-e6256f7d-31 in this example.

9. The qg-<n> interface in the l3-agent router namespace sends the packet on to its next hop through the external bridge, br-ex. This bridge is constructed similarly to br-eth1 and may be inspected in the same way.

10. This external bridge also includes a physical network interface, eth2 in this example, which finally lands the packet on the external network destined for an external router or destination.

11. DHCP agents running on OpenStack networks run in namespaces similar to the l3-agents. DHCP namespaces are named qdhcp-<uuid> and have a TAP device on the integration bridge. Debugging of DHCP issues usually involves working inside this network namespace.
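Returning to the GRE tunnel lookup described at the start of this section, a sketch of the flows you might find (the tunnel IDs, port numbers, and MAC addresses are illustrative):

    # ovs-ofctl dump-flows br-tun | grep 0x3
    ... tun_id=0x3 actions=mod_vlan_vid:1,resubmit(,10)
    ... dl_vlan=1,dl_dst=fa:16:3e:a6:48:24 actions=strip_vlan,set_tunnel:0x3,output:53
    ... dl_vlan=1 actions=strip_vlan,set_tunnel:0x3,output:53,output:58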

Finding a Failure in the Path
Use ping to quickly find where a failure exists in the network path. In an instance, first see whether you can ping an external host, such as google.com. If you can, then there shouldn’t be a network problem at all.

If you can’t, try pinging the IP address of the compute node where the instance is hosted. If you can ping this IP, then the problem is somewhere between the compute node and that compute node’s gateway.

If you can’t ping the IP address of the compute node, the problem is between the instance and the compute node. This includes the bridge connecting the compute node’s main NIC with the vnet NIC of the instance.

One last test is to launch a second instance and see whether the two instances can ping each other. If they can, the issue might be related to the firewall on the compute node.

tcpdump
One great, although very in-depth, way of troubleshooting network issues is to use tcpdump. We recommend running tcpdump at several points along the network path to correlate where a problem might be. If you prefer working with a GUI, either live or by using a pcap capture, check out Wireshark.

For example, run the following command:
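    # tcpdump -i any -n -v 'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'

This captures ICMP echo requests and replies on all interfaces.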

Run this command in each of the following locations:


1. An external server outside of the cloud
2. A compute node
3. An instance running on that compute node

In this example, assume these locations have the following illustrative IP addresses:
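 * Instance: 10.0.2.24, with floating IP 203.0.113.30
 * Compute node: 203.0.113.22
 * External server: 203.0.113.44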

Next, open a new shell to the instance and then ping the external host where tcpdump is running. If the network path to the external server and back is fully functional, you see something like the following:

On the external server:

On the compute node:

On the instance:

Here, the external server received the ping request and sent a ping reply. On the compute node, you can see that both the ping and ping reply successfully passed through. You might also see duplicate packets on the compute node because tcpdump captures the packet on both the bridge and the outgoing interface.

iptables
Through nova-network or neutron, OpenStack Compute automatically manages iptables, including forwarding packets to and from instances on a compute node, forwarding floating IP traffic, and managing security group rules. In addition to managing the rules, comments (if supported) are inserted in the rules to help indicate the purpose of the rule.

The following comments are added to the rule set as appropriate:


 * Perform source NAT on outgoing traffic.
 * Default drop rule for unmatched traffic.
 * Direct traffic from the VM interface to the security group chain.
 * Jump to the VM specific chain.
 * Direct incoming traffic from VM to the security group chain.
 * Allow traffic from defined IP/MAC pairs.
 * Drop traffic without an IP/MAC allow rule.
 * Allow DHCP client traffic.
 * Prevent DHCP Spoofing by VM.
 * Send unmatched traffic to the fallback chain.
 * Drop packets that are not associated with a state.
 * Direct packets associated with a known session to the RETURN chain.
 * Allow IPv6 ICMP traffic to allow RA packets.

Run the following command to view the current iptables configuration:
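    # iptables-save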

Note

If you modify the configuration, it reverts the next time you restart nova-network or neutron-server. You must use OpenStack to manage iptables.

Network Configuration in the Database for nova-network
With nova-network, the nova database contains a few tables with networking information:


 * fixed_ips: Contains each possible IP address for the subnet(s) added to Compute. This table is related to the instances table by way of the fixed_ips.instance_uuid column.

 * floating_ips: Contains each floating IP address that was added to Compute. This table is related to the fixed_ips table by way of the floating_ips.fixed_ip_id column.

 * instances: Not entirely network specific, but it contains information about the instance that is utilizing the fixed_ip and optional floating_ip.

From these tables, you can see that a floating IP is technically never directly related to an instance; it must always go through a fixed IP.


Manually Disassociating a Floating IP

Sometimes an instance is terminated but the floating IP was not correctly disassociated from that instance. Because the database is in an inconsistent state, the usual tools to disassociate the IP no longer work. To fix this, you must manually update the database.

First, find the UUID of the instance in question:
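A sketch against the classic nova-network schema; replace 'hostname' with the instance's hostname:

    mysql> select uuid from instances where hostname = 'hostname';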

Next, find the fixed IP entry for that UUID:
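    mysql> select * from fixed_ips where instance_uuid = '<uuid>';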

You can now get the related floating IP entry:
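    mysql> select * from floating_ips where fixed_ip_id = '<fixed_ip_id>';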

And finally, you can disassociate the floating IP:
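    mysql> update floating_ips set fixed_ip_id = NULL, host = NULL where fixed_ip_id = '<fixed_ip_id>';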

You can optionally also deallocate the IP from the user’s pool:
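    mysql> update floating_ips set project_id = NULL where fixed_ip_id = '<fixed_ip_id>';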

Debugging DHCP Issues with nova-network
One common networking problem is that an instance boots successfully but is not reachable because it failed to obtain an IP address from dnsmasq, which is the DHCP server that is launched by the nova-network service.

The simplest way to identify that this is the problem with your instance is to look at the console output of your instance. If DHCP failed, you can retrieve the console log by doing:
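    $ nova console-log <instance name or uuid>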

If your instance failed to obtain an IP through DHCP, some messages should appear in the console. For example, for the Cirros image, you see output that looks like the following:
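    udhcpc (v1.20.1) started
    Sending discover...
    Sending discover...
    Sending discover...
    No lease, failing
    WARNING: /sbin/cirros-dhcpc up eth0 failed

(The exact messages depend on the image and DHCP client version.)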

After you establish that the instance booted properly, the task is to figure out where the failure is.

A DHCP problem might be caused by a misbehaving dnsmasq process. First, debug by checking logs and then restart the dnsmasq processes only for that project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. Once you have restarted targeted dnsmasq processes, the simplest way to rule out dnsmasq causes is to kill all of the dnsmasq processes on the machine and restart nova-network. As a last resort, do this as root:
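    # killall dnsmasq
    # restart nova-network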

Note

Use openstack-nova-network on RHEL/CentOS/Fedora but nova-network on Ubuntu/Debian.

Several minutes after nova-network is restarted, you should see new dnsmasq processes running:
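    # ps aux | grep dnsmasq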

If your instances are still not able to obtain IP addresses, the next thing to check is whether dnsmasq is seeing the DHCP requests from the instance. On the machine that is running the dnsmasq process, which is the compute host if running in multi-host mode, look at /var/log/syslog to see the dnsmasq output. If dnsmasq is seeing the request properly and handing out an IP, the output looks like this:
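    Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
    Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
    Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
    Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e:56:0b:6f test

(The hostname, timestamps, MAC address, and IP shown are illustrative.)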

If you do not see the DHCPDISCOVER, a problem exists with the packet getting from the instance to the machine running dnsmasq. If you see all of the preceding output and your instances are still not able to obtain IP addresses, then the packet is able to get from the instance to the host running dnsmasq, but it is not able to make the return trip.

You might also see a message such as this:
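    Feb 27 22:01:36 mynode dnsmasq-dhcp[25435]: DHCPDISCOVER(br100) fa:16:3e:78:44:84 no address available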

This may be a dnsmasq and/or nova-network related issue. (For the preceding example, the problem happened to be that dnsmasq did not have any more IP addresses to give away because there were no more fixed IPs available in the OpenStack Compute database.)

If there’s a suspicious-looking dnsmasq log message, take a look at the command-line arguments to the dnsmasq processes to see if they look correct:
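    $ ps aux | grep dnsmasq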

The output looks something like the following:

The output shows three different dnsmasq processes. The dnsmasq process that has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be ignored. The other two dnsmasq processes belong to nova-network. The two processes are actually related: one is simply the parent process of the other. The arguments of the dnsmasq processes should correspond to the details you configured nova-network with.

If the problem does not seem to be related to dnsmasq itself, at this point use tcpdump on the interfaces to determine where the packets are getting lost.

DHCP traffic uses UDP. The client sends from port 68 to port 67 on the server. Try to boot a new instance and then systematically listen on the NICs until you identify the one that isn't seeing the traffic. To use tcpdump to listen to ports 67 and 68 on br100, you would do:
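    # tcpdump -i br100 -n port 67 or port 68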

You should be doing sanity checks on the interfaces using commands such as ip a and brctl show to ensure that the interfaces are actually up and configured the way that you think that they are.

Debugging DNS Issues
If you are able to use SSH to log into an instance, but it takes a very long time (on the order of a minute) to get a prompt, then you might have a DNS issue. The reason a DNS issue can cause this problem is that the SSH server does a reverse DNS lookup on the IP address that you are connecting from. If DNS lookup isn’t working on your instances, then you must wait for the DNS reverse lookup timeout to occur for the SSH login process to complete.

When debugging DNS issues, start by making sure that the host where the dnsmasq process for that instance runs is able to correctly resolve DNS names. If the host cannot resolve names, then the instances won't be able to either.

A quick way to check whether DNS is working is to resolve a hostname inside your instance by using the host command. If DNS is working, you should see:
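    $ host openstack.org
    openstack.org has address 192.237.223.22

(The address returned will vary.)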

If you’re running the Cirros image, it doesn’t have the “host” program installed, in which case you can use ping to try to access a machine by hostname to see whether it resolves. If DNS is working, the first line of ping would be:
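    PING openstack.org (192.237.223.22): 56 data bytes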

If the instance fails to resolve the hostname, you have a DNS problem. For example:
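    $ ping openstack.org
    ping: bad address 'openstack.org'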

In an OpenStack cloud, the dnsmasq process acts as the DNS server for the instances in addition to acting as the DHCP server. A misbehaving dnsmasq process may be the source of DNS-related issues inside the instance. As mentioned in the previous section, the simplest way to rule out a misbehaving dnsmasq process is to kill all of the dnsmasq processes on the machine and restart nova-network. However, be aware that this command affects everyone running instances on this node, including tenants that have not seen the issue. As a last resort, as root:
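    # killall dnsmasq
    # restart nova-network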

After the dnsmasq processes start again, check whether DNS is working.

If restarting the dnsmasq process doesn't fix the issue, you might need to use tcpdump to look at the packets to trace where the failure is. The DNS server listens on UDP port 53. You should see the DNS request on the bridge (such as br100) of your compute node. Let's say you start listening with tcpdump on the compute node:
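    # tcpdump -i br100 -n -v udp port 53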

Then, if you use SSH to log into your instance and try ping openstack.org, you should see the DNS request and reply appear in the tcpdump output.

Troubleshooting Open vSwitch
Open vSwitch, as used in the previous OpenStack Networking examples, is a full-featured multilayer virtual switch licensed under the open source Apache 2.0 license. Full documentation can be found at the project's website. In practice, given the preceding configuration, the most common issues are being sure that the required bridges (br-int, br-tun, and br-eth1) exist and have the proper ports connected to them.

The Open vSwitch driver should and usually does manage this automatically, but it is useful to know how to do this by hand with the ovs-vsctl command. This command has many more subcommands than we will use here; see the man page or use ovs-vsctl --help for the full listing.

To list the bridges on a system, use ovs-vsctl list-br. This example shows a compute node that has an internal bridge and a tunnel bridge. VLAN networks are trunked through the eth1 network interface:
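    # ovs-vsctl list-br
    br-eth1
    br-int
    br-tun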

Working from the physical interface inwards, we can see the chain of ports and bridges. First, the br-eth1 bridge, which contains the physical network interface eth1 and the virtual interface phy-br-eth1:
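    # ovs-vsctl list-ports br-eth1
    eth1
    phy-br-eth1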

Next, the internal bridge, br-int, contains int-br-eth1, which pairs with phy-br-eth1 to connect to the physical network shown in the previous bridge, patch-tun, which is used to connect to the GRE tunnel bridge, and the TAP devices that connect to the instances currently running on the system:
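    # ovs-vsctl list-ports br-int
    int-br-eth1
    patch-tun
    tap2d782834-d1
    tap690466bc-92
    tap8a864970-2d

(The TAP device names are illustrative and will match your running instances.)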

The tunnel bridge, br-tun, contains the patch-int interface and gre-<N> interfaces for each peer it connects to via GRE, one for each compute and network node in your cluster:
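    # ovs-vsctl list-ports br-tun
    patch-int
    gre-1
    gre-2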

If any of these links are missing or incorrect, it suggests a configuration error. Bridges can be added with ovs-vsctl add-br, and ports can be added to bridges with ovs-vsctl add-port. While running these by hand can be useful debugging, it is imperative that manual changes that you intend to keep be reflected back into your configuration files.
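For example, to recreate the VLAN bridge and reattach its physical port by hand, using the names from this example:

    # ovs-vsctl add-br br-eth1
    # ovs-vsctl add-port br-eth1 eth1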

Dealing with Network Namespaces
Linux network namespaces are a kernel feature the networking service uses to support multiple isolated layer-2 networks with overlapping IP address ranges. The support may be disabled, but it is on by default. If it is enabled in your environment, your network nodes will run their dhcp-agents and l3-agents in isolated namespaces. Network interfaces and traffic on those interfaces will not be visible in the default namespace.

To see whether you are using namespaces, run ip netns:
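    # ip netns
    qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5
    qdhcp-a4d00c60-f005-400e-a24c-1bf8b8308f98
    qdhcp-fe178706-9942-4600-9224-b2ae7c61db71
    qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d
    qrouter-8dabaee6-abcd-4ef9-97be-0245dd440e91

(The UUIDs shown are illustrative.)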

L3-agent router namespaces are named qrouter-<router_uuid>, and dhcp-agent namespaces are named qdhcp-<net_uuid>. This output shows a network node with four networks running dhcp-agents, one of which is also running an l3-agent router. It's important to know which network you need to be working in. A list of existing networks and their UUIDs can be obtained by running neutron net-list with administrative credentials.

Once you've determined which namespace you need to work in, you can use any of the debugging tools mentioned earlier by prefixing the command with ip netns exec <namespace>. For example, to see what network interfaces exist in the first qdhcp namespace returned above, do this:
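    # ip netns exec qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 ip a
    10: tape6256f7d-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether fa:16:3e:aa:f7:a1 brd ff:ff:ff:ff:ff:ff
        inet 10.0.1.100/24 brd 10.0.1.255 scope global tape6256f7d-31
        inet 169.254.169.254/16 brd 169.254.255.255 scope global tape6256f7d-31

(Device names and addresses are illustrative.)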

From this you see that the DHCP server on that network is using the tape6256f7d-31 device and has an IP address of 10.0.1.100/24. Seeing the address 169.254.169.254, you can also see that the dhcp-agent is running a metadata-proxy service. Any of the commands mentioned previously in this chapter can be run in the same way. It is also possible to run a shell, such as bash, and have an interactive session within the namespace. In the latter case, exiting the shell returns you to the top-level default namespace.

Assign a lost IPv4 address back to a project
1. Using administrator credentials, confirm the lost IP address is still available (see the sketch below).
2. Create a port.
3. Update the new port with the IPv4 address.
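A sketch with the neutron CLI; the address 203.0.113.17 and all IDs are illustrative, and exact client syntax varies by release:

    $ neutron port-list | grep 203.0.113.17
    $ neutron port-create --name salvaged-port <network-uuid>
    $ neutron port-update salvaged-port \
        --fixed-ips type=dict list=true subnet_id=<subnet-uuid>,ip_address=203.0.113.17

The first command should return nothing if the address is truly unallocated.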

Tools for automated neutron diagnosis
easyOVS is a useful tool for operating your Open vSwitch bridges and iptables rules on your OpenStack platform. It automatically associates virtual ports with VM MAC/IP addresses, VLAN tags, and namespace information, as well as the iptables rules for VMs.

Don is another convenient network analysis and diagnostic system that provides a completely automated service for verifying and diagnosing the networking functionality provided by OVS.

Additionally, you can refer to neutron debug for more options.